Apple Planning to Scan iPhones for Child Abuse Photos: The Flipside

Earlier this week, Apple announced its newest feature under the banner of child protection: Apple will scan every device's image library for child abuse imagery. On the surface, the initiative looks both commendable and sensible. But many fear that behind the noble cause, Apple may be intruding on users' privacy and opening a Pandora's box of surveillance issues.

But before jumping to any conclusions, let's understand: What exactly does the new feature do? On what basis will Apple decide whether an image counts as child abuse material? Will the feature actually help tackle child abuse, or is there an unforeseen downside?

Apple's New Child Protection Feature

Child sexual abuse has been a pressing concern for a long time, and with the rise of technology it is spiraling out of control. According to NCMEC, there were an estimated 18.4 million reports of child sexual abuse imagery in 2018 alone. So, undoubtedly, the technology giants have a social responsibility to curb this terrible abuse on their platforms, and Apple's newest feature aims to do just that.

Apple unveiled a set of new tools on Thursday, 5th August, for scanning iPhones for explicit content in both photos and text messages, including child pornography. The aggressive feature is intended to use technology to thwart child predators and the pedophiles who prey on children.

How Will It Work?

When used on a child's device, any explicit photo received via Messages will be blurred, with a warning that the content is subject to age restrictions. If the child opens the picture anyway, then depending on the child's age, the parent or guardian will be notified. The process works both ways, for receiving and sending.
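
To make the flow concrete, here is a minimal Swift sketch of the decision logic described above. The type and function names are hypothetical, not Apple's actual APIs, and the under-13 cutoff for parental notification is an assumption used only for illustration.

```swift
// Hypothetical sketch of the Messages safety flow described above.
// None of these names are Apple's real APIs; they only illustrate the logic.
struct IncomingPhoto {
    let isFlaggedExplicit: Bool   // result of an on-device image classifier
}

enum PhotoPresentation {
    case showNormally
    case blurWithWarning(notifyParentIfOpened: Bool)
}

func presentation(for photo: IncomingPhoto, childAge: Int) -> PhotoPresentation {
    guard photo.isFlaggedExplicit else { return .showNormally }
    // Assumed cutoff: younger children trigger a parental notification if they
    // view (or send) the photo anyway; older minors only see the blur and warning.
    let notifyParent = childAge < 13
    return .blurWithWarning(notifyParentIfOpened: notifyParent)
}

// Example: a flagged photo received by a ten-year-old is blurred and, if opened,
// the parent or guardian is notified.
print(presentation(for: IncomingPhoto(isFlaggedExplicit: true), childAge: 10))
```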

Up to this point, things seem justified, but further down the line the water gets murkier. In the second part of the feature, Apple will effectively scan every user's iCloud photos, searching for any evidence of child sexual abuse material, also known as CSAM.

When the feature rolls out, Apple will scan for and detect every CSAM image stored in iCloud Photos. If anything sketchy is found and the amount of material exceeds Apple's predetermined threshold, the suspected images will be sent for manual review, and Apple's reviewers will determine whether or not the images and the user need to be reported. If a reviewer decides the photos are genuinely abusive, they will be reported to the National Center for Missing & Exploited Children, and law enforcement authorities will be informed.

To protect users' privacy, Apple has clarified that all the scanning will happen on users' own devices, and iCloud Photos has to be enabled for a device to be scanned. So, before your images are uploaded to the cloud, your Apple device will check each photo against the hashes of known explicit images in the national database to find a match.

In short, the scan happens on the device, and the results only become accessible to Apple once the match count crosses a predetermined internal threshold. Apple is not disclosing that threshold, which is justifiable, considering that malicious users could otherwise try to stay just below it.
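
As a rough illustration of the on-device matching and threshold idea, here is a hedged Swift sketch. Apple's real system uses a perceptual hash (NeuralHash) plus cryptographic threshold secret sharing, neither of which is shown here; a plain SHA-256 digest, a placeholder hash list, and a made-up threshold stand in purely to show the flow.

```swift
import Foundation
import CryptoKit

// Placeholder digests standing in for the NCMEC-supplied list of known-image hashes.
let knownHashes: Set<String> = [
    "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
]
let reportThreshold = 10   // placeholder; Apple's actual threshold is undisclosed

// Compute a hex digest for a photo before it is uploaded to iCloud.
func uploadHash(for imageData: Data) -> String {
    SHA256.hash(data: imageData).map { String(format: "%02x", $0) }.joined()
}

// Count how many photos queued for upload match the known database.
func matchCount(for photos: [Data]) -> Int {
    photos.filter { knownHashes.contains(uploadHash(for: $0)) }.count
}

// Only when the number of matches crosses the threshold is the account
// escalated for human review; below it, nothing is surfaced to Apple.
func shouldEscalateForReview(photos: [Data]) -> Bool {
    matchCount(for: photos) >= reportThreshold
}

// Example: an empty photo library produces zero matches and is never escalated.
print(shouldEscalateForReview(photos: []))   // false
```

According to Apple's description of the system, each match is recorded in an encrypted safety voucher that Apple can only decrypt once the threshold is crossed, which is part of why the exact number is kept secret.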

The Unforeseeable Downfall for Apple Users?

Without a shadow of a doubt, Apple's new addition is laudable. Child abuse in any form must not be tolerated and should be punished under the law. But while the feature is proposed with the noble motive of strengthening law enforcement, its flip side is that it could open the gates to ever more demands for user data from governments.

The genuine concern is users' privacy. There is a suspicion that technology introduced for the morally inarguable cause of child safety could become a gateway to intrusions on users' privacy. True, under the current design the images are only matched against the NCMEC database. But who can guarantee that the same scanning mechanism will not be connected to other databases in the future?

The current debate is this: Is it okay for Apple to be the digital hall monitor, or an outright vigilante, for its users? And who is to say that Apple will not expand these features as it sees fit?

The Conclusion

Once these doors of privacy intervention open, they will be very tough to close. But what can users do in this catch-22, where on one hand they must compromise their privacy, while on the other, arguing against a child protection measure looks morally wrong, as if they surely have something suspicious to hide? Using a topic as sensitive as child abuse to silence opposing opinions is a naive way to frame the debate.

The arguments presented here are not against child safety; they are about the fact that accepting such features could lead to more severe privacy problems and could open the door wide to scanning for far less nefarious activities, especially if the capability falls into the wrong hands.

Apple is not the first to bring this kind of feature to market. Google and Facebook are two of the most famous names already using this technology for the greater good. But those two market leaders don't promise the level of privacy that Apple does, and this new technology would act as a backdoor into every Apple device, one with a high potential of growing wider and wider.
