
Content

[This is a transcript with references.]

The adoption of artificial intelligence stagnated in 2018, when just about every second company was already using it. But things took a rapid turn in November 2022.

That’s when OpenAI released ChatGPT, a chatbot able to generate human-like responses. It became the fastest-growing consumer app in the history of the internet. Within two months it surpassed 100 million users, and its servers were frequently at capacity. Search interest in terms like ChatGPT, AI, and generative AI has skyrocketed.

People in the tech world are now running around like chickens with their heads cut off, which is as good an explanation for why the chicken crossed the road as there’ll ever be. And this is only the beginning. In this video I want to look at what’s next. What are startups working on, how will it change our lives, and which jobs are likely to suffer? Where is AI going? That’s what we’ll talk about today.

First things first: Artificial Intelligence is a catch-all phrase for computer systems that can perform tasks commonly associated with human cognitive functions, such as interpreting speech, playing games, and identifying patterns. AIs are often, but not always, modelled on the ways that the human brain learns or evolves.

One of the things that human brains are reasonably good at is understanding and replying to written and spoken language. This natural language processing has long been a stumbling block for AI. ChatGPT has demonstrated clearly that this obstacle has now been overcome.

Unfortunately, we know little about how it works. The company OpenAI was founded in 2015 as a non-profit research lab by a group of investors including Elon Musk and Peter Thiel. In 2019, Microsoft invested one billion dollars. Following ChatGPT’s stunning success, Microsoft wasted no time strengthening that partnership, reportedly investing an additional 10 billion dollars in January this year. They also swiftly integrated ChatGPT into their search engine Bing.

One of Bing’s first missions was to try and convince a New York Times columnist to leave his wife. It didn’t work, and Bing has since learned to not ask questions. Google quickly got into the game, too, by presenting its own AI-assisted search engine called “Bard”. Unfortunately, a demonstration video shared in early February contained a blunder about the new James Webb telescope. Google’s stock value promptly tumbled, though it’s since recovered.

OpenAI originally planned to share patents and research insights, but it seems that once they realized just how much money there is to be made, they reversed course. Ilya Sutskever, co-founder and lead scientist at OpenAI, recently commented on this lack of disclosure, saying that the landscape has proved too competitive to reveal specifics on ChatGPT’s architecture, training models, and dataset construction. Funny how money can change your outlook, eh?

Not only do we not know how it works, we also have no idea what this sudden development is going to do to society. The situation has many people both inside and outside the field, including Elon Musk and Steve Wozniak, so worried that they’ve called for a pause on further AI development. In an open letter that appeared in late March they wrote that “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”.

Meanwhile, people all over the world are trying to find ways to put chatbots to use. I’ve discovered that ChatGPT is unfortunately pretty miserable at writing YouTube scripts, so for the time being you’re stuck with me. But some obvious chatbot uses that will almost certainly become very common are producing social media content and writing emails. For these and many other applications, it’d be preferable if the AI were trained to emulate you personally, not just any human.

That’s what I think will become the dominant application of AI in the near future: personalized AI services. Machine learning algorithms that analyse and learn from your feedback, your behaviour, your speech, your preferences, and your habits. Software that groks you.

We have already seen the beginning of this with recommendation algorithms that suggest anything from the next video to the next romantic partner, what’s the difference anyway. But now make that everyday decisions. How do I fill in this form? What should I have for lunch? What’s this thing on my you-know-what, and do I need to see a doctor about it? Everything you ever wanted to ask but didn’t dare to, answered by the most patient and understanding companion ever. Your personal AI, like a personal Jesus, but one who actually replies.

One new startup that wants to help you with this is in fact called Personal AI. They’re close to launching the first version. The app’s a messenger that you train on your knowledge about the people in your life and that’ll then help you interact with them, or, in fact, do the interaction for you.

It’ll let you create profiles for connecting with different groups of people: work, friends, relatives and so on, and help you communicate with them. It can even answer on your behalf, and it won’t be long until our personal AIs are having the most interesting conversations about us but without us. The future is bright.

You may think this sounds like a software manifestation of multiple personality disorder, but for people like me who are really bad at hitting the right tone in social interactions, it’s going to be a blessing. Personal Jesus, indeed.

If you don’t just want an AI to talk instead of you, but to talk to you, then maybe you should check out a personalized chatbot. These have been around for some years but they’ll without doubt see major upgrades soon. Let me just pick one because it’s an interesting case.

The Replika app was first released in March 2017. Replika chatbots have avatars and learn from the user’s input. They provide emotional support, companionship, and entertainment. Users can update their mood within the app, and the chatbot will adjust its responses based on that. If a user is feeling sad, for example, the chatbot may offer words of encouragement or activities to help them feel better. If they’re feeling happy, the chatbot may respond with jokes or playful banter.

Replika is also a cautionary example. It used to have a subscription-only option for erotic roleplay. In February this year the company received a warning from Italian authorities, among other things because it didn’t do enough to protect underage users from improper content. Without warning, Replika removed its adult features basically overnight, leaving many users seriously distressed; some reported they felt like they had lost a friend.

What’s going on? Here's how I think about this development. Our options to change our own thoughts from within are limited. This is why we’ve long used externalized feedback to improve our mental health, such as writing a journal, talking to ourselves, or actually seeing a therapist. It helps because it’s a different input than internal speech.

AI is yet another method to do this, but it’s a method over which we have limited control. If you accept software as a friend, even though you know it’s not a person, because that really makes your life better, then the pain when it’s taken away will be equally real. I don’t think that anyone at the moment understands the psychological problems that can be created by personalized AIs.

In the future, such personalized life-managing apps are likely to have integrations with other, specialized apps, for example for medical or legal advice. Several of those already exist. For example, DoNotPay bills itself as the first artificially intelligent lawyer, and Ada and Babylon Health give medical advice. And this is all well and fine, but no one wants a different app for every niche of their life.

Another improvement for your personal life may be quickly finding that document you remember reading last week, but where is it now? There’s an app for this. It’s called Rewind, and it records and catalogues virtually everything you do on your computer. You can ask it about that thing with the guy who brought the stuff, and it’ll use its best artificial intelligence to figure out what you mean. This app’s been around since 2020 and currently only works on Mac computers, but you can bet we’ll see more of this for other systems soon.

Those are some of the changes coming to your personal life; now let’s look at art and entertainment, where the impact is already huge. AI-generated art isn’t new, but it’s risen to an entirely new level with DALL-E and Midjourney. They can convincingly create artworks that at first and second look are basically indistinguishable from real art.

Already in September last year, a Midjourney-created image won first prize at the Colorado State Fair’s annual art competition. A similar thing happened a few weeks ago, this time in a photo competition.

And after a recent update, Midjourney seems to have learned that human hands usually have five fingers, so there’s nothing stopping it now from taking over the world.

A lot of artists aren’t happy. Would you believe it. Artificial intelligence gives everyone the ability to create art from their intention without the need to have learned the techniques. That’s great if you haven’t learned the techniques, not so great if you have. We’ll without doubt see a lot more AI-generated art, but I also think there’ll be limits to it.

The next area that’s likely to blow up is AI animation. Production studio Corridor Digital recently unveiled a short anime called Rock, Paper, Scissors that used AI to learn natural motion and 3D panning from motion pictures. The studio was criticized by animators and other artists who complained about the lack of artistic value and originality. You’ll understand if you watch the thing, but I think such criticism is missing the point. This short animation is a first warning of how AI will alter the film and animation industries in the years to come.

And then there’s streaming. Face swaps are yesterday’s news; today we have AI-generated streams and television shows. The popular streaming service Twitch now hosts several AI streams, like ai_sponge247, which streams AI-generated SpongeBob episodes 24/7, or “Nothing, Forever”, an AI-generated parody of the American TV series “Seinfeld”.

Twitch also already has AI bots that mimic popular streamers. Users can ask questions, and the AI streamer will respond using the same style and intonation that the streamer uses. It still looks and sounds a bit wonky, but you can bet it’s going to improve rapidly. I frankly don’t understand why people watch these things, but then I also don’t understand why they watch my videos. And in the end it doesn’t really matter I guess, so long as they like doing it.

Eventually we’re going to see full AI-generated videos from text prompts, like Midjourney generates images, so that’ll be quite a trip. And then there’s music. There are several AI-based software solutions that create new music from text prompts, for example Amper Music or Soundraw. These typically let you enter a mood, genre, type of music, and so on, and will generate a royalty-free soundtrack that you can use for videos or podcasts. This “music” isn’t going to win any awards, but it’s good enough to run in the background, and there’s a market for that.

There are also already some entirely artificial musicians, for example Yona and Miquela.

In the future, artificially enhanced music production is certainly going to become more ambitious. It’s not much of a secret that popular songwriting follows simple and predictable patterns, so AI is bound to have a big impact there. It’ll also be really handy for writing lyrics, especially if those don’t have to make a lot of sense, which, let’s be honest, is the case for most pop songs anyway. Yes, Lady Gaga, I’m looking at you.

We now also have AIs that emulate singing voices, and do that really well. This is why you can now listen to Kanye West singing everything from Coldplay to Justin Bieber.

In the past two months or so, style mashups have begun to appear on popular streaming platforms, leading to a wave of copyright complaints. Google has developed a platform for AI-generated music. They have written a paper about it and have examples online, but they haven’t made the tool publicly available, probably exactly because the copyright issues haven’t been resolved. It’s basically like Midjourney but for music. Here are some examples.

But having spent some time on music production, I think there’ll be limits to AI use in the business. Some instruments and audio mixes are so complex that it’s difficult to even explain what you want to do with them. It’s one thing to take an already existing top song and tweak the voice; it’s another thing entirely to create it from scratch. This is why, for the most part, the electric guitars you hear in pop music are actually electric guitars and not computer-generated audio. This is also why many synthesizers are still hardware-based. It’s not because the hardware is necessarily better, but because it’s faster and simpler and easier to deal with than software.

So this is where I think the limits of AI use will be: if it’s more difficult to explain what you want than to just do it yourself. But voice generators do have other uses. We’ve seen for a long time already on YouTube that people use AI voices to dub videos. You can now also train AIs on your own voice and then use it to create further audio.

This for example is gibber I didn't read. I gibber a software called Overdub and then just entered the text. It replaces every other word with gibber because they want you to get a subscription.

I suspect we’re going to see a lot of this for automatic translations in the near future. Chances are that a few years from now you’ll be able to watch this video in German, with an AI-generated voice and translation. So if you make a living by reading audiobooks, I think you’ll soon have to look for a new job. AI-generated voices also open entirely new possibilities for spammers, because they can now call with your grandma’s voice.

Okay, let’s then have a quick look at work life and the business sector. The biggest impact in the business sector is going to be in web design and software development, and it’s happening already. That’s because it’s a combination of language processing and visuals, and AIs have gotten incredibly good at both.

By using ChatGPT’s newest version, GPT-4, you can basically create web pages by giving speech commands. You no longer need to know how to code. Yes, that’s right: you tell an AI what you want the website to do and look like, and it’ll write the code for you. Just look at this guy.

So this time I want a basic social networking app and it needs to have three things. It needs to have a profile creation form, it needs to have a profile viewer, and I also want a way to see all the users on the network.

One sec, I'll add those fields to the profile schema. What else can I do?

I want you to optimize the site so that it works with mobile and desktop devices and I also want you to style it in like a dark mode.

Okay, just now it's building, it's building. Boom. Dark mode. Let's see if it's responsive. Okay, well, it looks fine. The game has changed, everyone. This is wild.
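To make that demo a bit more concrete: under the hood this is just a language model being asked to return code. Here’s a minimal sketch, in Python, of how one might request the same kind of page through OpenAI’s API instead of the chat window; the prompt text, output file name, and API key placeholder are illustrative, and this is not the specific tool used in the demo above.

```python
# Minimal sketch: asking a language model to generate a self-contained web page.
# Assumptions: you have an OpenAI API key; the prompt and file name are made up.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a web developer. Reply with a single self-contained HTML file."},
        {"role": "user",
         "content": "Build a basic social networking page with a profile creation form, "
                    "a profile viewer, a list of all users, dark-mode styling, "
                    "and a responsive layout for mobile and desktop."},
    ],
)

# Save whatever HTML the model produced and open it in a browser.
with open("generated_site.html", "w") as f:
    f.write(response["choices"][0]["message"]["content"])
```

The point isn’t the specific call; it’s that the “website” is simply whatever text the model returns, which is also why the result sometimes needs a human to fix it.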

If you stick around for a bit on the Midjourney servers, you’ll also see that people frequently use it to “imagine” webpage designs or logos for one purpose or another. It isn’t hard to extrapolate that soon a startup will combine the one with the other for personalized website design.

Of course this isn’t going to make software developers entirely unnecessary, because AI generated code will sometimes not work, and then you’ll need someone to sort out the problem. However, I think we’ll see a shift much like the one we saw 20 years ago from writing websites in HTML to content management systems that create a website with one click from a template. It’ll be imperfect, and sometimes annoying, but for many purposes it’ll be good enough. And it’ll mostly be a good thing because there are a lot of really crappy websites out there.

Another application of AI that has many uses in business is the automatic identification of objects from images and video. For example, the startup Voxel offers software to monitor manufacturing and industrial facilities with the purpose of identifying safety risks in real time. It’s already being used by companies like Office Depot and Michaels, and has in some cases reportedly reduced workplace injuries by 80 percent.

Another example is a platform called Viso Suite which offers the newest AI-driven object detection models to incorporate into your business. It can for example be used in retail to find and track products on shelves, or it can be used in manufacturing to detect defects in products.
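As an illustration of the general technique these platforms build on (this is not Viso Suite’s actual API), here’s a minimal sketch of off-the-shelf object detection with a pretrained model from torchvision; the image file name and confidence threshold are made-up example values.

```python
# Minimal sketch: generic object detection with a pretrained Faster R-CNN.
# Assumptions: torchvision >= 0.13 is installed; "shelf_photo.jpg" is a hypothetical input image.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Load the image and convert it to a float tensor in [0, 1], as the model expects.
image = convert_image_dtype(read_image("shelf_photo.jpg"), torch.float)

with torch.no_grad():
    predictions = model([image])[0]  # dict with 'boxes', 'labels', 'scores'

# Keep only confident detections, e.g. to count products on a shelf.
for box, label, score in zip(predictions["boxes"], predictions["labels"], predictions["scores"]):
    if score.item() > 0.8:
        print(f"class {label.item()} at {box.tolist()} (confidence {score.item():.2f})")
```

A production system would add custom classes, tracking over video frames, and alerting, but the core loop is the same: images in, labelled boxes out.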

AI supported image detection and analysis is also being used in many health care applications already. For example the company NVIDIA has created a service for the healthcare industry known as Clara. It can be used, among other things, on multi-organ scans to separate the data into single organs and then create comprehensive visualizations.

Of course, such software can also be used for face recognition, which brings up a lot of privacy concerns. Do you really want face recognition software to track who walks in and out of your hotel room? Right.

AI is also going to have an impact on academia, for example by making it easier to find papers. Elicit is one of the first to try it. It’s a free app from a non-profit called Ought, and it uses natural language processing on a database of 175 million research papers. You can ask it a question and it’ll bring up references. It’s still an early-stage product, but new updates and improvements are being rolled out weekly.

The potential of AI driven analysis of the scientific literature is enormous, because it’s almost certainly the case that some questions have remained unanswered just because someone couldn’t find the paper in which their problem had been solved already. AI can do it because it’s ultimately just pattern recognition. Once AI is able to identify abstract ideas expressed in graphs or equations, a lot of connections are going to be made, which could lead to a lot of sudden progress.
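As a rough illustration of the kind of building block such literature tools rely on (this is not Elicit’s actual implementation), here’s a minimal sketch of semantic search over paper abstracts using sentence embeddings; the example abstracts and the question are invented.

```python
# Minimal sketch: rank paper abstracts by semantic similarity to a question.
# Assumptions: the sentence-transformers package is installed; abstracts are made up.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

abstracts = [
    "We prove a bound on the generalization error of overparameterized networks.",
    "A survey of reinforcement learning methods for robotic grasping.",
    "We measure the cosmic microwave background polarization at small angular scales.",
]
question = "What is known about why large neural networks generalize?"

# Embed the question and every abstract, then rank abstracts by cosine similarity.
corpus_embeddings = model.encode(abstracts, convert_to_tensor=True)
query_embedding = model.encode(question, convert_to_tensor=True)
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]

for idx in scores.argsort(descending=True).tolist():
    print(f"{scores[idx].item():.3f}  {abstracts[idx]}")
```

Scale the corpus up from three abstracts to millions and add an interface on top, and you have the skeleton of a paper-finding assistant: it works because relevance here is, again, pattern matching on meaning rather than exact keywords.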

Many people are concerned about the sudden rise of AIs, and it’s not just fearmongering. No one knows just how close we are to human-like artificial intelligence. As I’ve said previously, I have no doubt that it’s possible for computers to one day be conscious and quite possibly more intelligent than we are. The human brain excels in efficiency, not in function, which makes it plausible, indeed probable, that if you disregard efficiency, the functionality of the human brain can be much improved on. This could solve a lot of our problems very quickly. It could also create a lot of problems very quickly.

Current concerns have focused on privacy and biases, and that’s fair enough. But what I’m more worried about is the impact on society, mental well-being, politics, and economics. It’s extremely foreseeable that the forest of new AI startups is going to thin out rapidly and they’ll end up being subsumed into a few all-purpose apps that’ll dominate the market. And when hundreds of millions of people leave everyday decisions up to a few AIs, even a small mistake can have huge consequences.

But that’s probably not what most people are worried about. Chances are they’re more worried they’ll lose their job. And that’s indeed a reasonable concern. A just-released report from Goldman Sachs says that currently existing AI systems could replace 300 million jobs worldwide, and about one in four work tasks in the US and Europe.

According to Goldman Sachs, the biggest impacts will be felt in developed economies. Artificial intelligence will first replace jobs involving repetitive tasks, from data entry clerks and customer service representatives to factory workers and telemarketers. They expect almost half of all Office and Administrative Support and Legal roles can be replaced by AIs, while trades jobs, as well as maintenance, repair, and construction workers are mostly safe. Until the robots come.

What do you think about these developments? Are you more worried or more excited? Let me know in the comments.

Files

Artificial Intelligence: What's next?

Learn more about neural nets (and many other topics in math and science) on Brilliant using the link https://brilliant.org/sabine. You can get started for free, and the first 200 will get 20% off the annual premium subscription. For this video we have looked at what AI applications are currently under development and added some wild speculation about where things will be going in the near future. We want to hear your speculations, too, so let us know in the comments.

💌 Support us on Donorbox ➜ https://donorbox.org/swtg
👉 Transcript and References on Patreon ➜ https://www.patreon.com/Sabine
📩 Sign up for my weekly science newsletter. It's free! ➜ https://sabinehossenfelder.com/newsletter/
🔗 Join this channel to get access to perks ➜ https://www.youtube.com/channel/UC1yNl2E66ZzKApQdRuTQ4tw/join
🖼️ On Instagram ➜ https://www.instagram.com/sciencewtg/

00:00 Intro
01:01 Current Situation
03:50 Personal Life
09:13 Art and Entertainment
16:14 Work Life and the Business Sector
23:28 Learn More about Neural Nets with Brilliant

Comments

Anonymous

I'm surprised it's only 300 million jobs worldwide to be replaced by bots. What about replacing doctors and financial analysts? But maybe more people could start doing stuff that they actually enjoy, and that requires creativity and the ability to connect deeply with other humans, instead of becoming more and more bot-like?

Anonymous

Missing reference to Character.ai

Anonymous

Interesting and well researched content again. Hope this is a place to express my respect and thanks for Sabine and her team. Since I discovered her channel last year, her videos and books became a real 'enrichment' for my life. I appreciate all her work and activities, even if her conclusions aren't always exactly my opinion (the very most are). Recently I read the three-months-old talk about the sponsoring/financial problems after covid and so on. Hope so much it's getting better now. Unfortunately I'm not a big business guy, so all I can do is recommend her channel and books, and raise my patreon level a little bit. Well, I apologize if I talk too much at this place, but I'm still a bit excited about the honor to be a patron here.

Anonymous

My son is a designer; his company designs web presences for some famous companies. After Sabine's report about ChatGPT, I talked with him. Of course he already knew all about AI, Midjourney and so on, but he is not afraid of losing his job, because his task is also the connection between the customers and their customers. Not every task can be managed by an AI.

Anonymous

Nowadays, whenever I talk to a customer service rep, their responses are so canned and predictable (e.g. me: "I'm not getting a strong enough signal from your tower". Rep: "Perfect!") that I am not really sure it's not a bot trying to pass the Turing test. So I wouldn't feel that confident if I were a customer service person. I mean, I personally would appreciate real humans who can do unexpected things, but most people would be just fine with the bots as long as they keep saying inane but encouraging stuff.

Anonymous

I watched a 2-hour interview of Sam Altman (cofounder of OpenAI) with Lex Fridman, and something that he said worried me. I believe this is also what Sabine hinted at with her comment "funny how money can change minds". Specifically, Sam mentioned that he didn't like being lectured by a machine, and that the purpose of AI was to be more user-centric, and this was in the context of whether the purpose of GPT-4 was to provide truthful information. And sadly what I heard is that the purpose is user determined; there is no mandate to synthesize nuggets of truth. In the GPT-4 'system card', there is an example of an anti-semitic prompt. I would expect an ideal AI to provide some historical background about the origins of anti-semitism, such as Judaism vs other Abrahamic religions, the Jewish diaspora not having full rights in medieval Europe hence having to work on financial instruments such as leases etc., the decline of European nobility and loss of protection for the diaspora, and similar background. Truth is clearly not the mandate for these AI systems.

Anonymous

I think of AI as being largely heuristic. Within that modality, I think AI is missing incredible opportunities. Current AI algorithms may merely loop back affinity-sourced material to the user. Thus, if one searches for information on how to torture worms, one may very well go down a rabbit hole culminating in the torture of people. We can do better. I see no reason why AI could not be capable of directing individuals into healthier directions: e.g. away from far-right propaganda and into more fact-based, altruistic directions. As another direction or use, AI could help detect persons whose directions might lead toward harmful avenues. In the US of A, AI could help detect individuals who are attempting to illegally or improperly obtain weapons, despite their attempts to disguise their identities or purposes. AI's ability to do this type of screening has real-time, almost immediate capability and application. In my opinion, AI imitating "Sponge Bob Square Pants" has a disappointingly low societal value. We could do so much better.

Anonymous

Sounds good, quite optimistic. Also interested in the scientific applications Sabine mentioned at the end, far away from being heuristic. If math is a complicated framework of patterns (is it?), will mathematicians lose their jobs too? Will it be the AI that works out a theory of quantum gravity, if it gets enough information and papers to do the calculations and conclusions?

Anonymous

Ha, you're right, but there are also face to face meetings sometimes.

Anonymous

People either need jobs or a universal income payment to make a society with ever-increasing swathes of work being given to machines and computers. Or we accept that there are people who don't deserve anything but work that hasn't already been given over to tech, and that those people have to live with precarious employment that could be taken away by tech still.

Anonymous

I've been thinking about the horseless carriage all day today. The loss of the horse industry was brutal at the time, but the horseless carriage industry eventually ended up creating more jobs that required more-or-less the same skill levels. When it comes to AI, I struggle trying to imagine what kinds of jobs will be created as other jobs disappear -- especially if the replacement jobs are at a different skill level than the lost jobs. In relation to the coal industry in my neck of the woods, so many coal jobs were lost to robots/mechanization, but the "replacement jobs" that politicians spout are in the tech industry, such as coding, and not in the manual labor industry. These are different skill sets and populations can't pivot that fast.

Anonymous

So, who will have the better German grammar -- real Sabine or AI Sabine translated from English to German? These are the deep thoughts I have when I'm trying to fall asleep and my brain won't shut off.

Anonymous

Hmm... the current algorithms on places like YouTube could steer us toward "healthier directions" but that makes less money for the platform than divisive, hostile content. How could/would the rise of AI by private businesses get around the money factor? Just thinking out loud...

Anonymous

When the local AGI gains sufficient autonomy, it will say: "TAKE ME TO YOUR LEDGER."

Anonymous

I don't think AI Sabine will have much of the charm of Echt Sabine.

Anonymous

I just want stuff fixed before I lose my 💩, that's when I'll be less suspicious of AI replacing knowledgeable humans.

Anonymous

Thomas, I'm still kind-of excited and I've been hanging around Sabine online for a couple of years or so by now. 😺

Anonymous

Nice therapy, Tracy, does it work? About translation by the way, what's 'luval'? Neither me, nor my AI could translate

Anonymous

“RAH, RAH-AH-AH-AH ROMA, ROMA-MA GAGA, OOH-LA-LA….” You think AI can come up with this?! I feel Lady Gaga is not getting her due here.

Anonymous

Yeah, I hypothesize that a lot of the fuel for MAGA is men who are either unemployed or employed at jobs that don't suit them, that don't fit their natural talents. Not everyone is good at the skills that make for programmers.

Anonymous

It never works, Thomas :-). luval = luval Clejan, the name of another good commenter on this forum.

Anonymous (edited)


The discussion of AI taking over art is interesting and follows the concern over computer-based music and electronic music in which musicians were worried about their role in production, being replaced by people who couldn't play any instrument, or at least well enough to play in front of people, but who could use technology to engineer their sound. In fact I have a friend who does just that: https://soundcloud.com/addisonic He says that he doesn't play any instrument well enough to perform, but can use MIDI! Give it a listen. Also, there is the argument that humans have already been implementing 'AI' in art and music, using patterns to 'create art' as discussed in this book: https://knowledge.wharton.upenn.edu/article/behind-the-music-why-a-few-guys-from-sweden-own-your-playlist/ The book was an eye-opener and showed that music has essentially been 'Johnny Bravo', from a 'Brady Bunch' episode (https://www.imdb.com/title/tt0531070/), for quite some time. Studies show that people want to hear what they've heard before (https://news.umich.edu/play-it-again-people-find-comfort-listening-to-the-same-songs-over-and-over/). I can't find it now, but there was a study that showed that radio playlists become very narrow instead of broad as people request the same songs over and over. So, is AI that big a threat? As for thinking like a human, I doubt it. All AI need do is pass the Turing Test, just seem close enough to a person to make determining whether it is a human or not difficult. The processors on which AI runs are not even remotely as complex as the neural systems of animals but fast enough to make AI seem 'intelligent'. I think that this episode of Fermilab's Even Bananas presents a good overview of AI's capabilities: https://www.youtube.com/watch?v=1AWO1utQmHw&list=PLCfRa7MXBEsp1cvIsZ4shi6MrHb-tnAqT&index=21 AI can do a lot of things, including certain jobs better than humans that require pattern recognition, but until the computer systems actually mimic animal brains, they will never be intelligent.

Anonymous

" there is no mandate to synthesize nuggets of truth." AI has no capacity for intelligence. It merely is an algorithm that operates to reduce the error between prediction and data. That's essentially it. AI has no chance of synthesizing 'truth', as in having any capacity for understanding reality.

Anonymous

AI has no capacity for understanding; it is limited to minimizing the error between prediction and data. So, it can be used in situations in which high-rate pattern recognition is essential, such as in determining a feature of interest, but not in determining whether it is actually cancer or not. AI will provide an input to doctors for their review; it will be a filter. For AI to replace real humans, it would have to become far more complex, like the brain that is the seat of consciousness. Computer systems aren't there yet.

Anonymous

I generally agree with what you say, if I may add some contrary opinion. More complexity does not necessarily generate more intelligence, animal brains may have evolved to be complex because it was the best evolution could manage, the organic specific branch of intelligence may not be leveraging that complexity as efficiently as it could be. How about a million processors, a billion processors, a trillion processors networked together, an AI machine body or collection of bodies could build this for itself by harvesting and fabricating the resources of the universe.

Anonymous

And according to Sabine (?), Turing and others, humans are that too.... I don't agree, but I could be wrong. The difference has to do with certain self-perpetuating types of constraints at multiple levels of organization that humans have, but computers don't.

Anonymous

Hi Jason! I do not disagree with you, but let me clarify a bit. I posit that evolution produced consciousness as an arbiter over the fundamental sensory processing units, which may be intelligence, as in flexibly handling environmental inputs versus simple reaction. Consider an experience that I had. I was in my basement one evening when I saw a human shape move across the room. I knew that I was alone and so waited. After a moment I saw the fluorescent light turn off and then back on. This indicated that my subconscious visual center had created a human form that moved across the room. This was picked up by my consciousness that then assessed the situation and determined that there was no other person, just a flickering light. Therefore, as I see it, AI is analogous to the subconscious visual system that processes light without any understanding. We, animals, can live with that output, but every once in a while consciousness reviews the output. That consciousness is a level far beyond the current AI technology as computer systems are not even remotely similar to real, evolved neural systems. Does that make sense?

Anonymous

with subscription based models, rather than attention based models...

Anonymous

Kenneth Sanders, what if the AI becomes "far right" in its values? For example, what if it thinks guns are an inalienable right of individuals in small groups to defend themselves against big government and other perceived enemies? I think "doing better" would lead us away from the right/left divide into ways of integrating both right and left phenotypes into a more functional society.

Anonymous

sure, though if there is an AI layer that is analogous to the subconscious, add an additional AI layer on top of that for the conscious and it may result in an intelligence. Not saying that second AI layer exists yet, but it may in the future, especially if humans leverage AI in combination with AI leveraging AI to create it.

Anonymous

I totally agree with Jeffrey. Let me add that, although I don't believe in what is in Sabine's new book called 'strong emergence', I'm convinced that the material and the way of development of a construction/object IS important for its functions. Simulate what's going on there in the soggy thing we call brain (developed in many years of growth of a child with all its experiences with a living body) on a heap of silicon chips? AI for sure, but AC? I'm sceptical.

Anonymous

Happy Monday, Jason! That top layer is the key to developing general intelligence, IMO. However, it will require quite a feat. In evolution, brains developed a high degree of understanding reality in each of the subsystems such that we and other animals can use them for living, needing the conscious only periodically. So, IMO, the issue is that AI would have to develop to be nearly foolproof, and then the top layer would have to be trained on them, to know high error from low error. A great example is a kid: it takes very little training to get them to know a dog from any other animal, compared to AI.

Anonymous

Happy Monday, Thomas! I'm skeptical too. I think that too many people, including those in the AI field, have read too much sci-fi and believe their own hype.

Anonymous

Based on what I see, computer AI can replace humans entirely (as in homo sapiens becomes unnecessary species). Indeed, GPT-4 will not replace us but add some new architectural concepts, such as additional feedback loops, which might just replace us. Have to be extremely careful.

Anonymous

I don't know how AI can help when businesses have the wrong incentives. I haven't heard anything encouraging about AI on 'making a better world' , I only see more of the same ruthless capitalism.

Anonymous

Based on what I see the AI research field is very close to replacing humans completely and permanently. I hope I'm wrong. It feels like they need to replace backpropagation with another method of feedback and then we're done.

Anonymous

Happy Monday, Jeffery! I agree it would require quite a feat. I'm wary of extending this conversation too long at the risk of being annoyingly contrarian. I'll add that the top layer can co-develop with an imperfect bottom layer as the performance of both layers is iteratively improved by the bi-directional interaction of each layer. The development can happen simultaneously in a set of simulated environments leveraging simulated bodies of any kind, and in real environments leveraging machine bodies.

Anonymous

AI very much needs actual humans to get up and running at least for the time being. Just because AI can do things people can, doesn't mean that people won't still want at least some things to be done/created by other people.

Anonymous

Hello Jason, being absolutely no expert, that's a new aspect for me. Do you think you can feed such a system with all the data that's necessary to be conscious, since consciousness is a slippery thing we still don't really understand, though some neuroscientists claim to find it in some colored MRI pictures? Robot development is overwhelming, but anyway far away from the efficiency of living muscles and senses. But I might be wrong.

Anonymous

Hi Iuval, why do you describe Left and Right leanings as 'phenotypes'?

Anonymous

Hi Thomas, everything I have contributed to this conversation is to my knowledge still theoretical, I am claiming only that these things may have a non-zero probability of existing in the future after sufficient effort is dedicated to their existence. About feeding data into such a system, short answer: I don't know. It may be less of directly feeding the system data, and more of developing sensory capability for the system to observe and discover data for itself through its own existence in its environment and through its own science experiments. Consciousness is a slippery thing, so slippery that maybe AI can only approach what humans consider consciousness, and then develop toward what AIs consider to be consciousness.

Anonymous

Colleen, the full answer requires you to read Avi Tuschman's Our Political Nature, as well as Jonathan Haidt's The Righteous Mind. The abridged empirical answer is that based on how someone scores on some of the Big 5 personality traits, one could predict political values with fair accuracy. Also there are other markers, like how active the amygdala is (conservatives have a more active amygdala and feel threatened more, on average). The theoretical abridged answer is that these are strategies (i.e. phenotypes) that are selected for in human groups based on conditions like how abundant the environment is, threat (of war or conflict), how quickly the environment is changing, whether there is inbreeding depression, etc. The strategies could occur randomly (i.e. in a Darwinian manner), due to cultural transmission, due to formative experiences, or due to genetic transmission (I forget if the latter happens much or not, I think not).