Who's Afraid of AI Agents? The Future of Automation, with Bob Ziroll

Bob Ziroll (00:00):
Be prepared that things are always going to change. You are likely not going to settle into one expertise when it comes to AI and then expect to coast on that for a really long time. I think to be a successful AI engineer, you'll need to always be up-to-date with the new stuff and be ready to just drop what you were doing and use the new thing.

Jan Arsenovic (00:24):
That was Bob Ziroll, the Internet's favorite React teacher. Well, his latest course is about AI, and believe it or not, there is React in AI as well. This is the final installment of our rapid response series on how to become an AI engineer. In the previous three episodes, we have defined an AI engineer and demystified the tools they use. We talked about foundation models and making them your own through retrieval-augmented generation and fine-tuning. We also talked about all the different use cases for using AI in your projects and why ChatGPT brought a fundamental shift in how we perceive AI. Today Bob will teach us about AI agents. AI agents are the future of automation. An agent is an AI that can perceive its environment. What exactly does this mean? How do you make an AI agent, and will they eventually take over the world? Bob's course on AI agents and automations is a part of Scrimba's brand new AI engineer path. Let's dive in. And welcome, your host, Alex Booker, and today's guest, Scrimba's head of education, Bob Ziroll.

Bob Ziroll (01:34):
I'm glad to be back. I feel like I'm one of the only people that have been on here twice, so it's quite a privilege.

Alex Booker (01:39):
Among the very few. That's right. I think last time we spoke it was more about your story learning to code. I know you did marketing in a past life and we spoke a bit about your pedagogy towards teaching and your journey with React and stuff like that as well. It remains one of my favorite conversations to this day.

Bob Ziroll (01:56):
Oh, well, thanks. I'm sure you just say that to everybody. Yeah, it's been a minute. I remember being really excited when the podcast came out and since then I've still been working at Scrimba and had a chance to release a React router course and a new advanced React course and then lately of course, working on AI, which is what we're here to talk about.

Alex Booker (02:16):
It's interesting, the two areas you choose to teach on are React and front-end development, which is always moving forward, it feels like, and then AI and AI engineering, which is also always moving forward but at an even greater rate, I feel like.

Bob Ziroll (02:30):
Yeah, it was jarring. I knew that it would happen. I knew that when we started recording things would get obsolete quickly, but I didn't anticipate that even before we launched there would be certain things that we had to start updating already. So it's a fast moving world, faster than front-end development for sure.

Alex Booker (02:46):
And Scrimba's trying their best to teach these things and I know we have the AI path and all the teachers that are getting involved, and I've been speaking with each teacher these last few weeks as part of this series where we try and shed some light on what AI engineering is, where the opportunities are and what are some of the things that you can achieve that you couldn't before. What's your take on all this AI stuff happening in the front end space at the moment?

Bob Ziroll (03:10):
There's so many different aspects of AI that it's hard to put a finger on just one of them. I think people listening to this will be familiar with tools like GitHub Copilot, and then I bet a lot of people have started using ChatGPT instead of going to Stack Overflow or even Google. That already is just making the road smoother for developers as they're trying to write their code. They have an assistant right there to help them do it and oftentimes even give them the code that they want. We joke about copy-pasting code from Stack Overflow and then having a bunch of irrelevant things in your code, but with ChatGPT, you don't always get the irrelevant part. You just get stuff that's using your variable names and using your function names and everything. It's crazy.

Alex Booker (03:58):
Yeah, it's literally tailored to the exact parameters you're expecting and it's going to slot into your code perfectly if you give it the right context.

Bob Ziroll (04:05):
Well, and that's the whole purpose of Copilot, right? It's not even copy paste. You just hit tab and it fills everything in for you, and currently you still have to make your own changes, but it does it.

Alex Booker (04:15):
Well, that's it. I think for so many of us, our exposure to this new wave of AI was ChatGPT basically, which people listening to this series will now know is based on the GPT foundation model, which everybody has access to. But then it kind of creates a branch, in my view, because for developers, one branch is using tools like ChatGPT to become more productive code writers, plus some debugging capabilities, which, by the way, are pretty underrated. I think you can use tools like ChatGPT to help with those things, but I suppose what an AI engineer is, and what we've been talking about in the AI path, is not using ChatGPT but actually using APIs to the GPT model that ChatGPT is based on to enable functionality in our applications that we didn't previously have the right tools to build.

Bob Ziroll (05:01):
Yeah, exactly. ChatGPT is a great example of the kind of thing you could build with the GPT models or really whatever models. We use OpenAI quite a bit in the AI engineer path, but we also talk about Hugging Face and the other open source models that are also quite capable.

Alex Booker (05:20):
I think I'm going to get into trouble soon because I always equate foundation models with GPT because it's easy for me to explain as something people recognize, but it's far from the only foundation model out there, and OpenAI are far from the only research lab participating in this space.

Bob Ziroll (05:36):
Yeah. I've noticed it's quite an arms race, just one will be ahead of the other any given day. Yeah, it's an interesting ecosystem.

Alex Booker (05:45):
So far on this series I spoke with Pat about what an AI engineer is. Tom gave us a fantastic introduction to foundation models, and then Guil in last week's episode, he taught us all about RAG and embeddings and vector databases, or in other words, how to make a foundation model aware of an external knowledge base it didn't have access to when the model was trained. Your module in the AI path is all about something called agents, I believe. What are agents all about?

Bob Ziroll (06:13):
I think it's getting trickier and trickier to explain what an agent is because as AI becomes more capable, it's all just kind of evolving into an agent. The way that I like to think of it is that it's like a more awake or a more aware version of an AI model. The textbook definition would be something like an AI model that can perceive its environment and interact with it. So take that for what you will, but-

Alex Booker (06:40):
So like the Terminator.

Bob Ziroll (06:41):
Sure. Yeah. As AI becomes more evolved, it's all probably going to essentially be agents. I mean, it's not just going to be a request-and-response sort of communication with an AI. It's going to be the AI then goes off and does its thing and then comes back to you, and maybe the thing that it does is purchasing plane tickets for you or having real-world ramifications to the prompts that you're asking it.

Alex Booker (07:07):
When I think about an agent, I think of a chat agent for example, and then I start thinking about ChatGPT and how it kind of is like an agent maybe that you can ask questions to and get responses from, but I sense that's not quite what you're talking about.

Bob Ziroll (07:23):
Yeah, it has the ability to perceive its environment now, though I think that might just be for the Pro members on OpenAI.

Alex Booker (07:31):
Oh, but a key characteristic of an agent is that it can perceive its environment, whereas at least in the early versions of ChatGPT, it didn't even know what time it was potentially, let alone what the weather was like or anything physical like that.

Bob Ziroll (07:46):
Exactly. Yeah. So it knew its training data cutoff, and that was I think famously known to be quite a while ago, so you couldn't ask what happened in the news yesterday or what will the weather be like today because it had no perception of its environment. It's getting closer to that now where it has some functions enabled on it where you can say, tell me the weather today and it will spin up a little loader that says doing some research with Bing, and then it reads through websites and then it comes back to you with answers. So I personally have been turning to ChatGPT as a sort of semi-agent to do most of my research because it not only can look at the same sites that I would've looked at, but it formulates a better Google question. It reads it immediately and it summarizes it for me in a way that makes sense to me.
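The "tell me the weather today" behavior Bob describes is usually built on tool calling (sometimes called function calling). As a rough sketch, and keeping in mind that the tool names and the fake model response here are illustrative rather than a real OpenAI payload, the mechanism looks something like this:

```javascript
// Tools the model is allowed to call. In a real app these would hit live
// APIs; here they return canned data so the shape of the idea is clear.
const tools = {
  getCurrentWeather: (city) => `Sunny and 24C in ${city}`,
  getCurrentTime: () => new Date().toISOString(),
};

// Route a model response: if the model asked for a tool, run it and return
// the result. In practice you'd send the result back to the model so it can
// produce a final natural-language answer.
function handleModelResponse(response) {
  if (response.type === "tool_call") {
    return tools[response.name](...response.args);
  }
  return response.text;
}
```

The key point is that the model never fetches the weather itself; it asks your code to run a named tool, which is how it "perceives" anything past its training cutoff.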

Alex Booker (08:35):
So if I'm understanding, well, it sounds like an agent can connect with the outside world in some way, and that could mean that it has access to data from the outside world. And by outside world, I mean outside the constraints of the model when it was originally trained, so it could go out and get information about the weather or recent affairs as you described it. So I think that's a good example. I suppose then you're still looking at a text output. It can't affect the outside world in some way. It's still producing a text output that you can then read or maybe do some processing on if you're using code.

Bob Ziroll (09:09):
Yeah, that's exactly right. It's able to research and give you a result. It can do things like using DALL·E to generate images for you, but even that's not really affecting the outside world per se. It's not going to place an order at a hardware store for you or purchase your next flights to the trip that it just planned. It's able to perceive its environment but not really interact with it.

Alex Booker (09:33):
That's a relief because when I was joking about the Terminator, I wasn't really joking.

Bob Ziroll (09:37):
It's listening, man. It knows. I think there are legitimate people that are legitimately thinking about how to progress AI without a Terminator scenario.

Alex Booker (09:48):
Scary, because as long as there's someone thinking about that, you might wonder if there's someone thinking about how to use it for malice or negative things.

Bob Ziroll (09:57):
Yeah. There's a funny AI agent that somebody built whose sole purpose is essentially to try and ensure human destruction. So I mean it just runs in the terminal and it can't interact with anything in its world, but it makes plans. You get to watch it making plans on how to disrupt the world economy and ensure the destruction of the human race, and obviously it's just an exercise in frivolity, but it's funny to watch because sometimes the plans it makes are very silly. They're not impactful at all.

Alex Booker (10:29):
This is a conversation about agents and I'm excited to get into the specifics of that in a minute, but this is a fascinating topic as well because when I spoke with Tom about foundation models, I asked him like, "Hey, how do these things work? I mean, what's the high level overview here?" And he did talk about things like neural nets and we get some parts of it, but basically he said even the AI researchers behind these technologies, they don't truly understand how it works.

And then I was listening to a podcast with Jeff Bezos and Lex Fridman last week, and Jeff Bezos put it in a way that made a lot of sense, which is that we didn't invent AI, we discovered it. And the reason why we're discovering it rather than creating it is because it keeps surprising us with what it can do. Even the people behind those foundation models, they're surprised like, "Oh, this works." I didn't predict or pre-program or expect that. And that's exciting in a lot of ways, but there is obviously that underlying question of if it follows that kind of unpredictability as it becomes more powerful and connected to the outside world, there could obviously be repercussions of that.

Bob Ziroll (11:32):
Yeah. I mean, to a much smaller degree, I have discovered that as well while I was recording my course and writing a really small agent from scratch, asking it certain questions and being surprised that it was able to nail it. I can't imagine what AI engineers are feeling, or the machine learning engineers are feeling, when they train a new model and then it's significantly more capable than they anticipated.

Jan Arsenovic (11:56):
Coming up, the future of AI agents.

Alex Booker (11:59):
You're describing a travel agent.

Alex Booker (12:02):
That's so perfect.

Jan Arsenovic (12:03):
But before that, I want to ask you, if you enjoy our show, to share it with someone. Word of mouth is the best way to support a podcast that you like. If you recommend us to somebody you know, you help make sure we can keep on making the show. You can do it in person or on your favorite Discord server or on socials. And if your Twitter or LinkedIn posts contain the words Scrimba and podcast, you might also get a shout-out right here on the show. Thank you in advance. And now we're back to the interview with Bob Ziroll.

Alex Booker (12:35):
Damn it. I'm not normally the kind of person who starts talking about AI dystopian futures, although there's always one of them at a party these days, I've noticed. The thing I want to do is bring it back to us as application developers and the APIs we have, including the ones related to building agents. So we've talked about what they are and we got a bit excited about some of the long-term prospects, but if we bring the conversation to today, what are some of the things that people are using agents for when they're building applications? What are some of the prominent examples or use cases of agents we see today?

Bob Ziroll (13:10):
The primary thing, it's sort of like the to-do list of the AI world. What I mean is, as a front-end engineer, you build a to-do list anytime you want to try out a new technology. With AI, it's a chatbot. You build a chatbot, and that chatbot has possibly some interaction with your data, like Guil talked about with embeddings. It's trained, so to speak, on specific answers that you want it to give. And with agents, I don't know that we have really seen that fully realized in a chatbot yet, but I personally had an experience with an Amazon chatbot probably six months ago. I was trying to return something that was past its return date and I was having this back and forth with an AI. It was clearly an AI because it was answering immediately, but its questions weren't just like, okay, click one of these four bubbles that are your only answers.

And then it was asking questions like, well, why didn't you return it on time, or is there anything wrong with it? And I was essentially able to convince it to allow me to return it, and then it processed the return label, it sent it to my email and it did interact with the real world. Sending an email alone seems inconsequential, but at no point, as far as I understand, did some tech worker or a service worker have to interact with me at all. The AI did everything. It made sure that I was not trying to cheat the system and then processed a return for me. In terms of a chatbot, that's a pretty cool example, I think, of what you could call an agent working.

Alex Booker (14:40):
I guess the part that makes it an agent is that it is connected to the system in some way. It can look up your orders, number one, that's context about the present day, and then it can also, if you think about a walking talking robot or even an autonomous car, there's just so many millions of possible combinations of what could happen. But I guess when you talk about printing a label, it's already a very robotic process because of the scale they operate at. This seems like a very early opportunity to connect the chatbot to that robotic system in order to, yes, eliminate the need for a support agent to manually do that. But as a customer, it's maybe a better experience as well because you get those responses instantly. And in your case maybe you just forgot to return it on time and you tricked the system. I'm not sure. But it was overall a good experience it sounds like.

Bob Ziroll (15:25):
Yeah, it really was. It was surprising when it was happening, and afterward, I mean I'm still thinking about it. It was like a dumb chat interaction with an amorphous company, and here I am still kind of thinking about how cool of an experience it was. So agents are, I think, going to be replacing, I don't want to say replacing people. I think it's going to be a really long time before people in their jobs are completely replaced, but they will be aiding workers as if they were helpers that can accomplish real world tasks. And so when we talk about what are some use cases for agents, I think it's kind of asking what are some use cases for hiring more people? What are some use cases for humans? They're going to be able to do, hopefully within boundaries, the same kinds of things that hiring additional workers could do, of course at a fraction of the cost and without some of the human rights violations that sometimes happen when you get into those scenarios.

Alex Booker (16:22):
And obviously I gave the example of a conveyor belt and robotics and stuff like that, which you could say is quite a predictable task, but what is to say they couldn't also participate in things like designing a website or coding a website?

Bob Ziroll (16:36):
Or coming up with the marketing strategy, knowing what's going to work best in A-B testing before it ever gets A-B tested. There are a lot of ways that its access to data and its ability to reason will either aid or potentially replace jobs, and I don't want that to be a scary thing. I think jobs have always been replaced by technology and continue to evolve with new realms of work. So for a podcast that is focusing on helping students get into front-end development, I'm not saying that it's something we need to worry about right now. It's going to be a very long time, I think, before there's anything like that to worry about.

Alex Booker (17:11):
I like your phrasing that technology doesn't eliminate jobs, it replaces jobs, and it actually can change jobs as well. There's a lot of things we as front-end developers spend a lot of time on that even after doing it ourselves we're like, "Oh, that wasn't satisfying." Or like, "Oh, I'm so annoyed at myself. That bug that I spent three hours on, I could have solved it in three minutes." That's not work we want to be doing necessarily. So already these types of tools might help with that part. The future is yet to be seen, I think, in the other ways that it evolves front-end development.

Bob Ziroll (17:41):
Yeah. For sure. I've got a relative, kind of a distant relative, who is working at Microsoft, and I think he's pretty involved on the machine learning side with OpenAI's partnership with Microsoft, and he has mentioned that he doesn't think it's something to be concerned about as far as everybody losing their jobs. So I'm feeling bolstered by that fact. Someone on the inside is bullish, I guess, on the job market as opposed to bearish and worrying about everything going down the tubes.

Alex Booker (18:12):
I would say that part of this series and the AI path that we released at Scrimba is to show people how to harness this power instead of fearing it. It is very easy to get caught up in the narrative where you think my job is to write code, write documentation, write communication about code, and what these tools, these LLMs, do really well is write really well and reason really well. But I'm assured that when you peel it back a layer and you start thinking through what is necessary to build successful software and how successful software is built, there's a lot of empathy required for your end user. There's a lot of collaboration involved with the people that write the code, and ultimately you need somebody in that position to assemble the code and the components. And actually building the right thing, not just building it well, is a big part of the problem in modern tech companies as well.

And I concede that agents being able to connect to the outside world in some way does perhaps give them better data to inform what might be the right thing to build, or, as you'd say, skip the A-B test completely. But I do think successful businesses are built on empathy a lot of the time, which I've never seen any evidence that AI has. It can maybe emulate it in a way, but at the end of the day, AI can't smile. It can't tell a story that resonates, and all these things, it can only mimic. So at a high level, that gives me some reassurance in the first case, but obviously what we're talking about here is actually using these AI-based tools, not just to generate and write code, but to add capabilities to our own applications through these APIs. That to me is very exciting because we need developers who are able to harness that.

And I do think that a lot of developers, I know we go on Twitter and we see all this discussion around the latest bleeding edge type stuff, but social media and even podcasts are like 0.001% of the developer community and professional developers. I'm fairly assured that the majority of developers might have a sense of fear about this AI stuff and they're ignoring it, basically putting their head in the sand a little bit. They're not learning how to harness this technology. So those that do, I think, have an advantage in that sense as well.

Bob Ziroll (20:18):
Yeah, absolutely. And we talk about empathy and I think that is a quality that people are looking for in businesses. And while AI can mimic empathy, it's always beneficial I think for a business to have a human-like I guess brand or a human-like representation. When you're interacting primarily online, it can be so helpful to have an AI that can mimic empathy for your brand. I mean this could easily get back into the dystopian or utopian discussion, but the point is you can, as an engineer at a company, you can use these AI tools to continue that brand image forward by essentially mimicking the empathy. I guess I'm thinking in terms of a chatbot, but having quick interactions that are also pleasant I think are still helpful for companies to have.

And so as an AI engineer, being able to use the AI tools at your disposal to continue your brand's mission, which likely is to have good interactions and to have empathy with your customers, I think can be a great thing. And I don't think it has to be either this robotic careless AI or a caring empathetic human because we also know many humans who are not great examples of empathy.

Alex Booker (21:36):
You got me thinking about the support side of things because just a couple of recent examples that come to mind, one time I ordered food to the apartment, but I put the wrong address in and I didn't realize until halfway there and I had to get in touch with support. And I could see the cogs turning behind the scenes. They have a support agent ready to respond. They get in touch with the driver, they try and update the system, they let the driver know, and it's a bit of a delicate situation like, "Oh, is the driver already gone too far in the wrong direction or can they come back?" And there's an element of judgment there that I don't know if an agent could get all the necessary context and then make a judgment call based on that. I'm sure it's technically possible, don't get me wrong, but we see how companies adopt technologies.

There are still computers out there running Windows XP, for example, and we see how even with all the best APIs and technologies, a lot of the time it's getting people to become part of that process that has a barrier as well, because of bureaucracy and stuff like that and resistance. Another example, I guess, is I use this app called Laundryheap, it's like a dry cleaning service, and I wanted to know, can you dry-clean this thing that isn't an option in your app, like a dressing gown or something, and they didn't get back to me for a day and a half because they had to go to the launderette, phone them, figure it out, get their quote. And you can totally imagine in a launderette like that, they're so busy, they've got so many orders, they're not checking the computer all the time, and you see what I mean?

I think there's this human aspect to it that you can't just plug AI into and fix. If you used Google Maps 10 years ago, or TomTom or something in your car, it would genuinely try and drive you into a lake, and for many years it wasn't totally dependable, but then after 10 years of little refinements, it's in a place where you can almost always depend on it. I guess what I'm saying is that that last 5%, that last mile, if you like, took a decade to arrive at. Sometimes it's that polishing aspect that takes a long time, so it could be a while before we have to start worrying about these things.

Bob Ziroll (23:28):
I generally agree with you. I am also finding that things move much quicker than I ever thought they would. And so it's hard to say anything definitive and know that what you said will age well. It's like for all I know, OpenAI is going to release GPT 4.5 or five, and now it's capable of doing everything you just said.

Alex Booker (23:49):
Oh my God, am I permanently going on record saying all this rubbish?

Bob Ziroll (23:52):
No, I think it's generally known that any predictions you make about AI are temporary and short-lived.

Alex Booker (23:58):
Good point. And besides, just to be clear, I don't mean any of this is advice. The advice I would share is to learn about these technologies. That's the number one thing you can do to learn how to wield them and stay ahead of the curve or adapt. I think the knowledge is key in that case. And on that topic, I know that you've made a course about or module about agents. I think you've given a fantastic definition about agents and I'm really starting to get a picture of how they could play a role in the real world. Let's shift the conversation a little bit to the technical side. Say I want to build my own agent, how would I go about doing something like that with my own code?

Bob Ziroll (24:34):
That's a great question, and it actually touches on something we haven't quite yet talked about, which is an agent's ability to create its own internal plan. When you're creating your agent, you can prompt it in a way that forces it to come up with a series of steps that it needs to take before solving your problem. With text generation, it just kind of figures out what the next most logical word is. As they say, it's like an autocomplete on steroids. With an agent, you can teach it to reason about its task. It'll come up with a plan step by step, and then it will start interacting with or perceiving from its environment on each one of those steps, sometimes creating additional steps as it learns more information, similar to a person. So when you're doing research, you might start going down rabbit holes of other research you need to do in order to fully understand or fully solve whatever task it is you're trying to do. And so agents are where we start getting closer to talking about artificial general intelligence.
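The plan-act-observe loop Bob describes can be sketched as a toy JavaScript function. Everything here is a stand-in: in a real agent, the planner and the step executor would both be calls to an LLM plus real tools, not the plain functions passed in below.

```javascript
// A minimal plan-act-observe loop. The agent drafts a plan, executes one
// step at a time, observes the result, and can grow the plan as it learns
// more -- the "rabbit holes" behavior described above.
function runAgent(task, planner, executeStep) {
  const plan = planner(task);           // 1. draft a step-by-step plan
  const log = [];
  while (plan.length > 0) {
    const step = plan.shift();          // 2. take the next step
    const result = executeStep(step);   // 3. act on / perceive the environment
    log.push({ step, result: result.observation });
    if (result.newSteps) plan.push(...result.newSteps); // 4. revise the plan
  }
  return log;
}
```

For example, a "research hotels" step might push an extra "compare prices" step onto the plan mid-run, which a fixed script written in advance could never anticipate.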

Alex Booker (25:35):
I hear what you're saying, it can make a plan. I don't really understand what that looks like in practice.

Bob Ziroll (25:39):
Let's say the agent's task is to plan my next trip to Hawaii. As a text generation model, it could just spit out some general stuff, these are the things that you should do. But as an agent, what it could do is start making plans to actually interact with the real world. I don't think anything is capable of doing this yet. This is all theoretical, but let's imagine that I can say I want to plan my next trip to Hawaii. It could start asking me some questions like, do you want it to be a relaxing trip or an adventurous trip or a mixture of the two? And I can respond in natural language telling it exactly what I want.

And at a certain point it's going to start making a plan that says, come up with a list of activities for each day of the trip. Purchase the tickets at a reasonable hour within a reasonable price. Start booking reservations at the hotel, start booking show reservations, getting my seats to a movie or a show or whatever it might be. All the activities that I'm doing there, swimming with dolphins, going on a boat trip, taking a helicopter ride. It can start not only giving you ideas but booking your travel, booking your tickets, doing everything for you, coming up with an itinerary, and then all you have to do is do what it told you to do.

Alex Booker (26:50):
You're describing a travel agent.

Alex Booker (26:52):
That's so perfect.

Bob Ziroll (26:53):
It's funny you say that, because that's one of the solo projects.

Alex Booker (26:55):
So if you went into ChatGPT in its current form without any of the external stuff, right, so 3.5, first of all, if you just ask it, give me a plan for Hawaii, it wouldn't know enough to give you good answers. So it would say, "Oh, if you're going in the summer, you might consider ABC. If you're going in the cooler period, you might do..." It gives you a very generic answer. There is an onus on you to be more specific there. I'm not sure about that thing you described, because you could say part of the plan could be to clarify a few things first. I don't know if that's part of the plan you're describing or if the plan is truly the itinerary plus the actions to make that itinerary happen as you describe it.

Bob Ziroll (27:33):
100%. I think that's certainly something you'd have to teach your agent. As the developer creating the agent, you'd have to teach it to do that, but it would only improve the experience once it has additional information. It's able to see those edge cases where you may not have been specific enough, gain extra information from you perceiving from its environment, which is us in this case, and then continuing on with its plan.

Alex Booker (27:59):
Yeah, that's so cool because as coders we write scripts to do things and our plans are scripted. And so if you were writing a script to help someone go to Hawaii, you'd have to pre-imagine every scenario. The instructions would have to be quite specific, but I guess if you're coding an agent or building an agent, you can give a more general instruction, consider all the things that the user hasn't given you that might be helpful towards creating a comprehensive plan to go to Hawaii or something a bit more general like that I think, and then it can kind of use that to ask questions in a loop it sounds like as part of the plan.

So if you were to write the plan on paper, it could be, "Okay, the first step in my plan is to ask all the clarifying questions. They could look like this, but feel free to do your own thinking as well." Maybe the second step in the plan is to then produce an itinerary. The third step in the plan is to go through each step in the itinerary and start to reason about how can we make these as easy as possible for the person going on holiday to just show up and do it. So there's a sub plan, maybe like book the tickets, add the dates to the calendar, set up a reminder, I don't know, book a taxi from here to there. I wonder if I'm explaining my understanding well.

Bob Ziroll (29:09):
Yeah, that's exactly right. And I don't know that there's any reason why someone building an agent like this couldn't make it real time as well. So let's say you're on your trip and there's unexpected rain, the agent could be notifying you saying like, "Here's a bunch of backup plans I made for you," or "I've canceled those tickets for this outdoor event." The possibilities are endless.

Alex Booker (29:28):
You said it's like no one's done this and it's theoretical, but it does sound like it could be technically possible as well if someone dedicated themselves to a problem like this.

Bob Ziroll (29:37):
Certainly, yeah. As you go through the section that I recorded at Scrimba, you can start to see how creating the tools that interact with the external world is no different from something we learn in the front-end developer career path. Once you know how to interact with APIs, you can start to create things like this. Now, I don't think the APIs exist in a meaningful way for anybody to go book flights, for example. You kind of have to be a big company to do something like that, but it's not out of the question to see that it's not in the distant future. That's a lot of negatives. It's coming soon.

Alex Booker (30:11):
Yeah. But this is a sneak peek into the future maybe, because if British Airways, for example, or a website like Skyscanner, that aggregates flights from a bunch of different places, but they partner with the airlines to get that data, I think. I don't think it's a public API necessarily. But if there's a commercial angle there, you can totally imagine all these companies collaborating to somehow make their data accessible to an agent like this because there is a benefit to the end user. They might sell them more flights, and that's just one example, right? There are probably thousands of data sources that could be exposed in a way that would make these agents possible because interconnecting the world via APIs is a path we're on already. This could just be an impetus that makes it even more important to offer the necessary data or endpoints to do things like book flights or figure out the best time to fly and things like that.

Bob Ziroll (30:57):
Yeah. I think we are very far down the road of connecting the world with data. There are situations like with flight information and such that I think start to pose security concerns and it's always a concern to allow purchasing of tickets. You don't want some agent that's written poorly to start buying up tickets on somebody's credit card.

Alex Booker (31:17):
Yeah, good point.

Bob Ziroll (31:18):
There are some regulations and some protections that need to happen around that, but just because it's hard work to do doesn't mean there aren't going to be people doing it.

Alex Booker (31:26):
And we're dreaming big here, but today there are APIs available to make stuff like this happen. We were just talking about booking flights, which is really complex, but there are lots of examples, like you mentioned, Bob, to do with chatbots, for instance, that are in reach today. Talk to me a little bit about React, and I'm curious how you're going to interpret that question.

Bob Ziroll (31:45):
When I first decided to take on the agent section, one of the first things I learned is that there's a prompting strategy called ReAct, and it has absolutely nothing to do with the front-end framework, or library, React, so it seemed very fitting, maybe a bit of karma sticking me back into the React world. In terms of AI, ReAct is a prompting strategy that's short for reasoning plus acting. We talk about prompt engineering, pre-determining how your AI model should respond to you, and ReAct is just a way to tell it, "Hey, I want you to think about this problem and then figure out what task needs to happen next, and then I will give you the power to perform that task, and you'll be called again with the result of performing that task."

A really simple example: we talked about how an AI model doesn't know what the current weather is. I can say, "Tell me what the current weather is," and it will reason, obviously a little elementary, "I need to figure out what the weather is," and that would be its reasoning step. Then it would know that it has the ability to look up the weather because of a function we have provided it to get the current weather. It will call that function, and the results of that API call will then be reinserted into the chat history with this AI, and it will awaken with the new knowledge it just gained from calling that API, if that makes any sense. Now, in its chat history, it has the results of what the current weather is, and it can continue to solve the task. In this case, the task is simply to convey what the current weather is, so it would just respond with the current weather.
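The loop Bob describes can be sketched in a few lines of JavaScript. Everything here is a stand-in: `callModel` fakes the LLM's responses, and `getCurrentWeather` is a hypothetical tool, not a real weather API. The point is only the cycle of reason, act, observe, and reason again with the observation in the history.

```javascript
// A hypothetical tool the agent is allowed to call (not a real weather API).
const tools = {
  getCurrentWeather: (city) => `Sunny, 24 degrees in ${city}`,
};

// Stand-in for an LLM call. A real model would read the whole chat history;
// this fake one just checks whether an observation has come back yet.
function callModel(history) {
  const lastObservation = history.find((m) => m.role === "observation");
  if (!lastObservation) {
    // Reasoning step: the "model" decides it needs to use a tool.
    return {
      thought: "I need to look up the current weather.",
      action: "getCurrentWeather",
      input: "Oslo",
    };
  }
  // The tool result is now in the history, so the model can answer.
  return { finalAnswer: `The weather is: ${lastObservation.content}` };
}

// The ReAct loop: call the model, run whatever tool it asked for,
// feed the result back in as an observation, and repeat until done.
function runAgent(task) {
  const history = [{ role: "user", content: task }];
  while (true) {
    const step = callModel(history);
    if (step.finalAnswer) return step.finalAnswer; // task complete, stop looping
    const observation = tools[step.action](step.input);
    history.push({ role: "observation", content: observation });
  }
}

console.log(runAgent("What is the current weather in Oslo?"));
```

In a real agent, `callModel` would be a request to a hosted model with the history as context, and the loop would keep going through as many tool calls as the task requires.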

Alex Booker (33:22):
I will say, for the benefit of people listening, the best way to truly let this sink in is to see it illustrated, I think, because you can see the thought output from the model, what its reasoning is, and then you can see the action it arrived at as a result of that reasoning. And that action is usually something like, "My action is to call this function," get weather in this case, or something like that. But I suppose a key point here is that it's not just a one-and-done; there's a loop here where it can keep going and build more comprehensive answers.

Bob Ziroll (33:57):
Yeah. It determines whether it has completed the ultimate task.

Alex Booker (34:01):
The ultimate task.

Bob Ziroll (34:03):
The ultimate task. If not, it comes up with more plans. There's a great site called AgentGPT. It's neat because you can tell it to complete some arbitrary task for you and it will output its thought process, just like what you're talking about, in real time. It'll tell you, "Okay, I need to do these five tasks," and it'll start completing task one, and then it will revise its five tasks. It does start to go off the rails, so I don't think it's a great way to actually plan your trips or whatever, but in terms of understanding what an agent is doing, it's pretty neat to watch it accomplish its tasks and come up with new tasks, and it really does feel like you've hired an agent to do some work for you.

Alex Booker (34:46):
Very cool. We're almost out of time, I think, but I would be remiss if I didn't ask you, from your perspective as a teacher and one of the Internet's most famous React teachers, how you would recommend students, and me for that matter, stay up-to-date with these things. Because I've touched on it a few times during this interview that a lot of developers aren't really paying attention to this. They're either a bit scared of it and would rather not think about it.

Or, frankly, understandably, they're just waiting for the dust to settle. I don't think it will settle, by the way, but in any case, for those of us who want to keep up with this stuff, it can still feel quite overwhelming. I check out some AI podcasts, some AI YouTube channels, I read posts and newsletters, and I just can't keep up. I physically can't keep up with some of this stuff alongside my priorities. And for someone listening, their priority might be learning front-end development and getting a job as a front-end developer. I do think it's important to recognize that that is a good priority to have, and this shouldn't overtake everything. But for people like those, and like me, who are just keeping on top of it on the side, how do you recommend staying up-to-date and advancing our learning without getting totally overwhelmed?

Bob Ziroll (35:58):
The first piece of advice I would have is to be prepared that things are always going to change. You are likely not going to settle into one expertise when it comes to AI and then expect to coast on that for a really long time. I think to be a successful AI engineer, you'll need to always be up-to-date with the new stuff and be ready to just drop what you were doing and use the new thing. That sounds pretty counterintuitive when we talk about front-end web development because there's always new things and they oftentimes disappear.

Alex Booker (36:27):
Kind of, but it feels slow compared to AI, fair enough.

Bob Ziroll (36:30):
Yeah. These days I think it's a bit more stable. AI may settle into something stable in the future, but for the next, let's say, decade at least, things are going to be moving very rapidly. So I guess the advice I would have is exactly what you mentioned: being on some email newsletters, subscribing to YouTube channels, being involved with subreddits that are AI-specific, or at least specific to the realm of AI you're interested in, and then just being willing to spend your time testing new things and trying to keep building. The more you build, the better you get at anything, which is what I always say; you just need to practice to get good at something. And when the field is moving so quickly, you're going to have to double down on that, and you're just going to be practicing new things and trying out new things all the time.

Alex Booker (37:17):
That's awesome. That's really good advice. Bob Ziroll, thank you so much for joining me on the Scrimba Podcast. It's been a pleasure.

Bob Ziroll (37:22):
Yeah, it has. Thanks for having me.

Jan Arsenovic (37:24):
That was the Scrimba Podcast, Episode 143. If you made it this far, please subscribe. You can find the show wherever you get your podcasts. So Spotify, Apple Podcasts, Overcast, Google Podcasts, Castbox, Amazon Music, Deezer. I mean, if you can name it, we're probably there. The show is hosted by Alex Booker. You can find his Twitter handle in the show notes. I've been Jan the producer. Keep coding and see you next time.
