AI, Privacy and Security (Part 1)


Is the genie out of the bottle?

AI has to be the topic of the year. Everyone is talking about it. Smarter people than me have written about it in great detail.

I want to reflect on several ideas pertaining to AI.

The first one is really a note about the need for caution. There have already been numerous events where AI has acted nefariously; whether that was at the direct behest of human owners or an “accidental” consequence of other actions, it has resulted in bad things happening.

The second is around the idea of security and whether it can truly be implemented within the AI environment. The learning aspect of its algorithms (we cannot predict their outcomes) means that AI will change and do things that were not expected.

The need for caution

Everyone is talking about how the AI industry needs to be regulated. Some of the loudest proponents of regulation are in the AI industry itself. This, I think, is a brilliant strategic move on their part.

The industry is deliberately not regulating itself – why would it? There is a lot of money to be made by those in the lead.

Why is this a brilliant strategy?

  1. They know that no government will move to implement anything significant in the short to medium term. Governments most likely do not have the internal expertise to appreciate what is involved, what is at stake, and, more importantly, what the implications of any regulation would be for local industry and for stifling innovation.
  2. It absolves them of any responsibility for what might go wrong. They see the negatives, they know they exist. “Don’t blame us – we warned you – we told you to do something”

The creators of AI are clearly indicating the need for caution while happily driving it forward as fast as they can.

A clear example of how AI can be used for bad just as easily as for good was the research article “Dual use of artificial-intelligence-powered drug discovery”, published in Nature Machine Intelligence. The simple flip of a single digit from a 1 to a 0 in the code changed it from something looking for life-saving drugs to something looking for life-taking drugs (deadly poisons).
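Purely as an illustrative sketch (this is not the paper’s actual code; the scoring function, molecule names and numbers below are assumptions), the dual-use flip can be thought of as a one-digit flag in an objective function:

```python
# Hypothetical illustration of a dual-use objective flip.
# The model, scores and names are made up for this sketch.

def candidate_score(efficacy: float, toxicity: float, penalise_toxicity: int = 1) -> float:
    """Score a candidate molecule.

    penalise_toxicity = 1 -> toxicity lowers the score (search for safe, effective drugs)
    penalise_toxicity = 0 -> toxicity raises the score (the same search now hunts poisons)
    """
    sign = 1 if penalise_toxicity else -1
    return efficacy - sign * toxicity

candidates = [("molecule_a", 0.8, 0.1), ("molecule_b", 0.6, 0.9)]

# Same search loop, opposite outcomes, depending on a single digit.
best_safe = max(candidates, key=lambda c: candidate_score(c[1], c[2], penalise_toxicity=1))
best_harmful = max(candidates, key=lambda c: candidate_score(c[1], c[2], penalise_toxicity=0))
print(best_safe[0], best_harmful[0])  # molecule_a, molecule_b
```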

The military leading the charge (pun intended)

This dichotomy applies to the use of AI in military applications.

The military in most countries is investing in AI. Everything points to AI now being the differentiator in battle. AI technology is being rapidly adopted because it is all too often proving vastly superior to humans, especially when winning comes down to smarter and faster decisions, such as in a dogfight between fighter jets.

At the time of writing we are not quite there yet – we are not unleashing squads of robots onto a battlefield. The AI is not quite there, and the robots are not quite there.

I feel that it is just a matter of time till this changes.

We still have an agreed general rule of thumb for “autonomous” machines: the “kill decision” must be left to a human. However, when the fog of war descends some time in the (very) near future, we will see battle-ready robots being deployed, and they will be carrying weapons.

So picture this: an AI robot is moving through a war zone. It detects a human, detects that the human is carrying a weapon, and, based on direction of travel and location, assesses with 95% confidence that this human is about to attack the good guys’ position. The human controlling the robot presses the kill button, making the robot open fire.

The result would be the same for a real soldier – during a battle, if you see an enemy holding a gun coming towards you, you shoot first and ask questions later.

That scenario is only a single digit flip away from setting the AI robots to shoot automatically when the 95% threshold is reached instead of waiting for a human (see the sketch below). Then you simply send thousands of AI robots into an area with the same command – seek and destroy – rather than tying up thousands of troops individually monitoring the robots, waiting to hit the kill button if required.
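A minimal, hypothetical sketch of that flip might look like the following; the function, flag and threshold are illustrative assumptions, not real weapons-control code:

```python
# Hypothetical illustration of the "single digit flip" described above.
THREAT_THRESHOLD = 0.95
HUMAN_IN_THE_LOOP = True  # flipping this to False removes the human from the decision

def should_fire(threat_score: float, human_approved: bool) -> bool:
    """Decide whether the robot opens fire on a detected target."""
    if threat_score < THREAT_THRESHOLD:
        return False
    if HUMAN_IN_THE_LOOP:
        # The robot only fires when the human presses the kill button.
        return human_approved
    # Fully autonomous mode: the score alone pulls the trigger.
    return True

# Human in the loop: no fire without approval, even at 97% confidence.
print(should_fire(0.97, human_approved=False))  # False
```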

But AI is a learning machine. If it starts to see children carrying weapons, or sees children alongside combatants and links them by association, it will learn that they are a risk. That pushes up the assessment algorithm’s scores, and it will start to rate children as having a greater chance of being enemy combatants, lifting them into the “to be killed” category. AI has no moral compass, so it is now killing children because they have reached its self-adjusted score of 95%.
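Again purely as an illustration (the update rule, learning rate and numbers are assumptions), that feedback loop might look like this:

```python
# Hypothetical sketch of the learning drift described above: each sighting of a
# group near weapons nudges that group's threat prior upwards, with no moral
# judgement anywhere in the loop.
threat_prior = {"adult_combatant": 0.90, "child": 0.10}
LEARNING_RATE = 0.05

def observe(group: str, seen_with_weapon: bool) -> None:
    """Nudge the group's threat prior towards 1.0 or 0.0 after each sighting."""
    target = 1.0 if seen_with_weapon else 0.0
    threat_prior[group] += LEARNING_RATE * (target - threat_prior[group])

# A run of sightings of children near combatants steadily drags the prior up
# until children cross the same 95% firing threshold as adult combatants.
for _ in range(60):
    observe("child", seen_with_weapon=True)
print(round(threat_prior["child"], 3))  # roughly 0.959
```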

Of course, if you are a crazed dictator set on eliminating everyone, you could just send them in with a “kill everything” command, since the certainty factor is just a number that needs to be changed. All you need is the money to buy the AI bots.

Rise of the Machines?

Of course we are a few years away from something like The Terminator being a real scenario. But how many years?

Remember that the idea of fitting 20 flying drones into a small briefcase was not possible ten years ago; today you can buy them in toy stores. Research is underway on “swarming” behaviour. Combine all of these changes and, in the space of 5-10 years, short films such as Slaughterbots have moved from existing only in the world of science fiction to being feasible in the real world.

How does this impact security as we know it?

These risks are real in the military context. It is only this year that we have started to discuss what AI will mean for us in our day-to-day lives.

Currently the negative aspects of AI in our day-to-day lives are being fought from a privacy angle.

The European Union is already taking steps towards making laws around AI, proposing bans on applications that pose an unacceptable risk, such as “social scoring”: apps that gather data en masse about people, what they do and say, and who they interact with.

“The MEPs further banned biometric categorization, predictive policing and software that scrapes facial images from the internet to build databases like Clearview AI. AI systems for emotion recognition were also forbidden in law enforcement, border management, workplaces and educational institutions.” (MEPs – Members of the European Parliament)

From <https://iapp.org/news/a/europes-rulebook-for-artificial-intelligence-takes-shape/>

In Australia, the Clearview AI application was found to have breached Australia’s Privacy Act 1988.

The creation of massive data lakes of information, combined with AI’s ability to correlate and see patterns in this data, is quite frightening from a privacy angle. Consider the combination of these data sets:

  • Shopping loyalty cards and tracking of your purchases – how healthy is your diet? In fact, retailers can track your purchase history simply by keeping records of each purchase against a hash of your credit card (see the sketch after this list)
  • Your movements via phone health apps and other GPS tracking (or your lack of healthy activity)
  • Your check in and general tagging of locations on social media
  • Your expenditure when in locations such as pubs and clubs
  • AI scanning photos to determine where you have been, what your hobbies and activities are, who you are with and their lifestyles
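To illustrate the loyalty-card point above, the sketch below shows how purchases can be linked back to one shopper without ever storing the card number itself (the hashing scheme and data are made up; real systems will differ in detail):

```python
# Hypothetical sketch: a hash of the card number acts as a stable identifier
# that links every purchase made with that card.
import hashlib

def card_key(card_number: str) -> str:
    """Derive a stable identifier from a card number."""
    return hashlib.sha256(card_number.encode("utf-8")).hexdigest()

purchases: dict[str, list[str]] = {}
for card, item in [("4111111111111111", "cigarettes"),
                   ("4111111111111111", "energy drink"),
                   ("5500005555555559", "salad")]:
    purchases.setdefault(card_key(card), []).append(item)

# Each key now carries a full purchase history, ready to be joined with any
# other data set keyed the same way.
print(purchases)
```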

The Pros

Think of how useful all this data could be. Around 15% of the world’s 7.9 billion people live in first world countries – over 1 billion people. The predictive health benefits of a dataset of one billion people would be enormous. It would allow researchers to start investigating ways to prolong life through earlier detection of medical issues at a scale never seen before. The use of AI over this data set could save millions of lives in the years to come.

The Cons

Consider now that an insurance company has access to the same information and is using it to determine what your premiums would be. How healthy is your lifestyle? What are the indicators of a shorter life or illness? Or it may simply stop insuring you past a certain age because you are no longer profitable.

Consider now how a government might use this information – a mapping of who is talking to whom, for example.

Put all of this data and its negative uses together and you start to see why it is quite frightening from a privacy angle. We certainly don’t want to end up in a police state where pre-emptive policing is seen as acceptable rather than as a harmful policy.

Part 2 (coming soon)

FUN FOOTNOTE?

While writing this article I went to get my annual eye check. At the end, the optometrist asked me if I would like to opt in to an AI scan where the AI system would analyse my retinal image. The scan could identify things such as macular degeneration. I opted in – as an experiment – to see what information I would receive. Less than one hour after I walked out of the optometrist I received an email with my results.

Who knows what the implications are of a private company now having a copy of those images – or the future implications for my insurance premiums, given that they may now be able to predict illnesses in advance and adjust or decline cover to suit themselves.


