The problem with AI

When we think about what's needed to revolutionize technology and humanity, one might imagine the creation of sentient systems or highly intelligent robots we could talk to and collaborate with. But if we look at AI today, it doesn't seem to be heading that way. To me, it feels very narrow — one needs to implement a specific program for every problem that AI can solve. Not only that, but the current AI systems we have require a ton of computational power.

An idea that has been cooking in my mind for quite some years now is an animalistic approach to this problem — to manually teach and raise artificial minds. In other words, I've been thinking about creating and raising digital babies. If we take a look, most animals that might be considered intelligent learn from their parents' example and spend a few years doing so, like most primates and the American Crow.

The reason I bring this up is that the computational power needed for today’s AI systems goes mainly into training. And if we think about how animals learn complex activities and how they gain their understanding of the world, we can clearly see that it’s not by crunching big datasets.

Humans, for example, learn through experience, through the example of their parents and others around them. And it takes time, lots of practice, and involvement from the parents.

I mean, most people start calling themselves professionals in their adult life, about 20 years after they’re born. We spend our first few years learning the basics of movement and first-hand physics. Then we start learning language, and it takes us quite a few years to become somewhat eloquent. Then it takes a few more years to understand the social dynamics around us, and a few more to master a skill like programming or playing the piano.

So, instead of designing AI algorithms that solve specific problems, we need to design a cognitive architecture that would allow for an artificial mind to be raised by others.

Defining our requirements

Before we can design a cognitive architecture, we need to define exactly what it is that we want from an AMI (artificial mind) and what using one would actually look like. Just like when designing software, we need to think about the user experience, the user's goals, the way the user communicates, and the technical limitations.

What I personally want from an artificial mind is for it to be someone who helps me orchestrate my digital tasks and customize the software I use, through something that would look like a chat UI.

I want to be able to tell my AMI “let’s work on X project” and have a set of software tools start up with the correct state.

Then, I want to be able to tell it “hide everything and open the book where I left off yesterday. I only want to receive urgent messages” and expect that to happen the way I expect it to happen.

How would the AMI know what I expect when I say those things? It would know because I would have manually taught it, just like you would teach a human personal assistant.
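To make this teach-by-example idea concrete, here is a minimal sketch of what the simplest possible version could look like: an assistant that stores a mapping from phrases to demonstrated sequences of actions and replays them on request. Everything here (the `TeachableAssistant` class, its `teach` and `ask` methods, and the action strings) is hypothetical illustration, not a real cognitive architecture — a true AMI would generalize from examples rather than look up exact phrases.

```python
# Hypothetical sketch: teaching an assistant by example.
# The names and the phrase-to-actions design are assumptions for illustration.

class TeachableAssistant:
    def __init__(self):
        # Learned lessons: phrase -> list of actions demonstrated by the user.
        self.lessons = {}

    def teach(self, phrase, actions):
        """Associate a phrase with the actions the user demonstrated."""
        self.lessons[phrase.lower()] = list(actions)

    def ask(self, phrase):
        """Replay the taught actions, or ask to be shown."""
        actions = self.lessons.get(phrase.lower())
        if actions is None:
            return ["I don't know how to do that yet. Can you show me?"]
        return actions


assistant = TeachableAssistant()
assistant.teach("let's work on X project",
                ["open editor with X workspace",
                 "start local dev server",
                 "mute non-urgent notifications"])

print(assistant.ask("let's work on X project"))
```

The gap between this lookup table and a raised mind is exactly the point: the interesting work is in an architecture that can infer what “hide everything” means in a new context from past demonstrations, not in storing literal phrases.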

There's a problem with this, though. Our computers and software don’t give us the flexibility to control our tools in the way we would need to. And even though Alexander Obenauer is currently exploring solutions to this problem, it’s not a reality today.

In the meantime, I could give my AMI access to the same peripherals I have (screen, mouse, keyboard), but that would probably require more processing power, and it would make the cognitive architecture harder to create, since we’d be getting into visual computing just to process all the pixels on my screen.

To create a single cognitive architecture that allows for multiple mediums of interaction, we need a universal interface between AMIs and our world. Consider this analogy: we use our body to interact with the world around us — it’s our interface with the physical world. And we use our fingers and eyes to access the digital world through the interface that is a computer. So, what would the body of an AMI be, and how does that affect the cognitive architecture? How does the world it lives in work, so that the objects that exist there serve as interfaces for controlling our human tools? Before I can explain how exactly we are to raise artificial minds, I’ll need to explain a few more ideas about the creation of a digital world where AMIs can live.