My current blog is epistem.ink. This one is here just for archival purposes.

AI and automation are at odds

Imagine for a second that we live in a land of perfect coordination. Instead of gradually introducing self-driving cars, we all agree on a set date on which we stop driving and allow computers to do it for us.

How soon can we automate driving?

I'd say somewhere between 1980 and 1990. Why? For the same reason we were able to automate trains, assembly lines, and planes: automation is easy when there aren't many human factors to control for, such as when everything is automated.

If every car is fully automated, follows a standard protocol, and can communicate with nearby cars about its actions, then driving becomes a hard but solvable problem, addressable with a few hundred conditionals. Road signs can be standardized to send signals to the cars instead of relying on visual cues. Safety measures, such as shutting down all nearby traffic when an accident is about to happen, become easy to implement.
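As a toy illustration of what "a few hundred conditionals" could look like, here is a minimal Python sketch. The message format and the rules themselves are invented for the sake of the example; the only assumption is that every car broadcasts a standard state message:

```python
from dataclasses import dataclass

# Hypothetical standard message every car broadcasts; field names are illustrative.
@dataclass
class CarState:
    car_id: str
    lane: int
    speed_kmh: float
    braking: bool

def react(own: CarState, nearby: list[CarState]) -> str:
    """Toy rule set: with perfect information from every participant,
    each decision reduces to a plain conditional."""
    for other in nearby:
        if other.lane == own.lane and other.braking:
            return "brake"   # car ahead in our lane is braking
        if abs(other.lane - own.lane) == 1 and other.speed_kmh == 0:
            return "slow"    # stopped car in an adjacent lane: possible incident
    return "maintain"

def emergency_stop(nearby: list[CarState]) -> bool:
    """Safety measure from the text: halt all nearby traffic if any
    broadcast looks like an accident (a car stopped while braking)."""
    return any(c.speed_kmh == 0 and c.braking for c in nearby)
```

The point is not these specific rules, but that once every actor reports its state over a shared protocol, the decisions need no learned model at all.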

i - Natural enemies

There are cases where "AI" is used to automate a task that previously only a person could do (I'm using the term to mean some heuristic for learning equations from data or simulations, rather than writing them out by hand). But the vast majority of use cases for AI, especially the flashy kind that behaves in a "human-like" way, might just be fixing coordination problems around automation.

Automation, almost by definition, implies adding rigour to a task: giving programmatic control over all decisions, collecting all relevant information, being able to simulate every scenario in advance.

If you could treat humans as perfect automatons, the vast majority of tasks people do would not require all the brainpower that's put into them. But a lot of fail-safes and redundancies are needed to allow for human error in action and communication.

Not to mention that many tasks get combined into one role, so we end up with people fulfilling 'hidden' roles besides their main occupation inside a business, or simply with roles that are a combination of many loosely related tasks (think receptionist or bartender).

Thus we end up with rather complex jobs, where something like AGI could be necessary to fully replace the person. But at the same time, these jobs can be trivially automated if we redefine the role and take some of the fuzziness out.

A bartender robot is beyond the dreams of contemporary engineering. A cocktail-making machine, a conveyor belt (or drone) that delivers drinks, ordering and paying through a tablet at your table... beyond trivial.

Indeed, part of the reason why many jobs that we currently have weren't automated could be that they weren't widespread enough to be worth automating.

Automation takes a long time, partly because the "end-user" has to get used to it, and there will always be an end-user somewhere, and partly because it's a tedious process: covering every single edge case and making sure nothing is lost along the way takes a lot of time and testing.

It may well be that the main reason many service jobs are not automated is that there weren't so many service jobs 20 or even 10 years ago, not that we need "AI" to automate any of them.

So paradoxically, we may observe a phenomenon where increased automation reduces the number of use-cases for AI.

ii - Some more examples

Let me provide another example of where AI could play a big role in business: answering phone calls to handle bookings and customer support.

This involves some rather incredible tech, from voice synthesis to language generation and interpretation, enough to gather the necessary data and maintain the conversation.

But is this necessary? In most cases, it isn't. If a restaurant needs an "AI voice assistant" to handle reservations, and it has the digital infrastructure in place to use one... guess what? It could just get an online booking page that's faster and simpler. The same goes for hospital or ticket reservations. The same goes for customer support: if your bot can help the customer, so would a properly designed online FAQ.


Or take, as a third example, identity confirmation: everything that goes into preventing identity theft, confirming identities, and tracking people. Ethics aside, this is something AI is being heavily employed in, from detecting fraudulent transactions, to recognizing speech patterns when a customer calls, to tracking faces in a crowd.

This is probably a much more common problem if you live in the US than if you live in Europe because, drumroll, the US doesn't have standardized IDs, while most European countries do. Add a standardized digital ID that's required for everything, add banking apps that require biometric authentication to confirm transactions, add phone and email confirmations for sensitive actions... and you could get rid of a lot of identity-related problems.

My bank could try to detect whether a transaction on my account is fraudulent... or it could require fingerprint confirmation on my phone for every single transaction I make with my card. Most banks are choosing to focus on the latter.
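To make the contrast concrete, here is a toy Python sketch of the two approaches. Both function names are invented, and the threshold heuristic merely stands in for a learned fraud model:

```python
# Approach 1 (AI): score the transaction and guess.
def looks_fraudulent(amount: float, hour: int, merchant_risk: float) -> bool:
    """Stand-in for a learned fraud model: a guess based on patterns."""
    return merchant_risk > 0.8 or (amount > 1000 and hour < 6)

# Approach 2 (coordination): just ask the cardholder.
def authorize(confirmed_by_fingerprint: bool) -> bool:
    """Deterministic rule: with biometric confirmation required for every
    transaction, there is nothing left to guess."""
    return confirmed_by_fingerprint
```

The first approach needs data, training, and tolerance for false positives; the second needs only that every customer has a phone with a fingerprint reader, which is a coordination problem, not a modelling one.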


The holy grail of NLP is software-writing automation, and the bravest and boldest of companies claim they are working towards building app code from just text instructions, or by looking at some mocks.

This is awesome, except that, well, you can already kind of build apps without code, from WordPress to Wix to Squarespace to 1001 others providing similar services.

Your needs have to be rather niche, the kind that require a programmer to help you figure out what you actually want, not only build it for you.

So the promise of AI coding is awesome, except that it's at least 10 years behind the manual automation of the markets it wishes to capture.

iii - AI leads to "dumb" automation

Go back to the car example. Assume that by 2030 most cars are self-driving, but the self-driving software is imperfect and people are people, so accidents still happen.

A law is passed banning people from driving and mandating that all vehicles be automated within certain safety standards.

Slowly, the safety standards are raised, which is possible because all vehicles are forced to follow them. In a way, this removes some of the autonomy of the self-driving box; it limits or even replaces some "AI" functionality with mere if/else clauses.

Add to this collaboration on communication standards between car companies, to make their lives easier, and suddenly you no longer need all the fancy vision models to detect other cars. Maybe the government intervenes and makes road signs and traffic lights digital on highways... suddenly your vision apparatus is redundant anywhere but in small neighbourhoods, and so on.
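A tiny sketch of the same idea, with an invented message format: once signs and lights broadcast their state digitally, "reading" them is a dictionary lookup rather than a computer-vision problem.

```python
# Hypothetical digital signage message; the format is invented for illustration.
def read_sign(msg: dict):
    """Replaces an entire vision pipeline with a lookup on a broadcast message."""
    if msg.get("type") == "speed_limit":
        return int(msg["value_kmh"])
    if msg.get("type") == "traffic_light":
        return msg["state"]  # e.g. "red", "green"
    return None  # unknown message types are simply ignored
```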

By 2050 we have automated driving to the point where there are no casualties, and we realize that the remaining systems contain only vestiges of what we called "AI"; the rest is a bunch of if/else clauses.

This is an example of AI making itself useless for a given job. If AI is used because of coordination problems that stop us from changing user behaviour and/or automating everything at once, then its introduction might well be a good starting point for implementing "real" automation, since it replaces the people who are causing the coordination problems to begin with.

I'm still kind of fuzzy on this whole idea. I think there's a very real chance that in a few years from now, we'll notice a reduced need for AI, as the result of a bunch of "manual" automation.

I was always really bored by the idea of working on human-like AI, and I think this gives me a new reason to be hedgy on that front... so maybe there's some built-in bias in the above ideas that I'm not noticing. Brains are fantastic at jumping through many hoops to reinforce already-held beliefs. But then again, already-held beliefs are generated by underlying observations which, one would expect, would lead to reinforcing ideas. So take this opinion with a bag of salt.

Published on: 2021-09-20









