OpenAI for Developers: How to Use AI for Coding, with Tom Chant

Tom Chant (00:00):
There's been a lot of bad AI so far that most of us have not had to interact with; what we're actually seeing now is the fruits of that. I believe that for the next decade or so we are going to be tweaking the large language model idea and we're going to be getting better and better results, but we're going to be faced with the same problems we've got now.

Alex Booker (00:19):
Hello, and welcome to the Scrimba Podcast. On today's episode, I'm speaking with Tom Chant about what AI means for new developers. Is it still worth learning to code? Will robots take our jobs? Spoiler alert: no, if you're learning to code, it's still a great investment. However, Tom argues AI will augment developers and have an influence on the industry, the problems you solve, and how you solve them, and he wants you to get ahead of it. So Tom is joining me today to recap how developers are using AI for things like code generation, debugging, and, believe it or not, pair programming, plus prompt engineering. In other words, how can you get the most out of these tools with carefully crafted prompts and by configuring options you may not yet be familiar with, like temperature?

(01:11):
Using AI to generate code is just one side of the discussion. Did you know ChatGPT and GitHub Copilot are both built on APIs by a company called OpenAI? Those same APIs are available to you, meaning you can use them in your app to build powerful AI features or re-skin ChatGPT in your own app, and even fine-tune the model to incorporate knowledge unique to your business or project in the answers. As the teacher behind Scrimba's brand new course, Build AI Apps with ChatGPT, DALL-E, and GPT-4, which is also linked in the description, Tom is in a unique position to tell us all about it. I'm your host, Alex Booker, you are listening to the Scrimba Podcast, let's get into it. To set the stage for our conversation around AI and coding, can you tell us what tools people are using to leverage AI in their coding today?

Tom Chant (02:10):
It depends what they want to do, but the hype at the moment is all around the OpenAI APIs, and that can include the ChatGPT models if you're looking at basically language generation, but of course, there's also Codex if you're looking at code generation and DALL-E for image generation. But then there's a whole load of other stuff out there, there's Midjourney, there's all sorts of things going on, and obviously Bing and whatnot are getting in the game as well. But in terms of what developers are using at the moment, it feels like the zeitgeist is very much with OpenAI right now.

Alex Booker (02:44):
What about GitHub Copilot, that's another one I've seen people using, is that OpenAI as well?

Tom Chant (02:49):
Yeah, that came onto the scene obviously some time ago now. Actually, I couldn't give you a precise date on that, but it feels like it's been around a while.

Alex Booker (02:57):
Yeah, it was like late 2021, I think.

Tom Chant (02:59):
And it caused quite a lot of commotion, didn't it? When that came out, it was certainly quite a big thing and the same questions arose like, does this replace developers? It's a really interesting tool actually, and it's much more complicated in a way than using ChatGPT to generate code. I think Copilot is very much a tool for developers, in no way, shape, or form could it ever conceivably replace a developer. I might live to regret saying that, but I think that's the way it looks at the moment, it's very much a developer's tool as opposed to ChatGPT, which is a much more of a code generation tool potentially.

Alex Booker (03:38):
That's the thing when talking about AI and speculating about the future on a recording, or even a LinkedIn post in my case from time to time, I often wonder if I'm going to live to regret it. We're not speculating too much here, I feel like, but if I understood you right, with ChatGPT, you can open a prompt and say, write me this function, and you can configure it to take certain arguments. A lot of the time, what you're going to get back is something equivalent to what you might have found on Stack Overflow had you taken the time to Google it, except maybe the function in this case is slightly customized to your prompt and your use case, which is obviously a huge productivity win.

(04:14):
Copilot does a similar thing except it typically lives inside your code editor, and one way I've seen it described is auto-completion on steroids. Is that a fair comparison between the two, do you think? I suppose the quality of the output should be similar if they're fundamentally based on OpenAI, although I can imagine they might be trained with different models, for example.

Tom Chant (04:34):
Yeah, I actually feel like the output you get from Copilot is likely to be superior, though I can't make any promises on that, for the fact that Copilot is asking more from you. It's really making a prediction based on code you've already started writing, not just a description of code. As you say, it is more than just prediction, it is on steroids as you put it, and there are other things you can do with it, like having it create code from comments, et cetera. But ultimately, I think you are giving Copilot more and it's giving you more.

Alex Booker (05:06):
You're giving it more in the sense that maybe you're not writing a prompt, you're not giving it more in that way, but it has your whole code base available for context to go on and suggest something.

Tom Chant (05:15):
Exactly. It's got your code base and it's likely got the beginning of the code that you wanted to write. It is much more like predictive text, in that sense, it's starting off halfway through what you're doing or not halfway through, but at least you've done something and it's making a prediction based on that. As you say, it's already got your entire code base there. And that is very similar to ChatGPT, when you give it more, it gives you more. The advice when using ChatGPT, whatever you are using it for or whatever you are doing with OpenAI basically, is just to keep being more and more specific and give it more of what it wants, and that's what Copilot is taking from you automatically really.

Alex Booker (05:50):
Have you noticed that the quality of your prompts dramatically impacts the quality of the code you get back?

Tom Chant (05:57):
Absolutely. Good prompt design, prompt engineering is everything.

Alex Booker (06:01):
Prompt engineering, what is that? Because it sounds so sophisticated, and I don't know the difference between what I'm doing, which is typing words into ChatGPT, and properly engineering a prompt.

Tom Chant (06:14):
I think that's a really good point, and I think there's a lot of language actually coming out of OpenAI and the whole AI phenomenon which is really quite loosely used. Prompt engineering is essentially prompt writing; however, prompts can be quite complicated or they can really be very simple. If you just open up ChatGPT and say, write me a function that squares two numbers, that's a prompt, and essentially it's a prompt that you've engineered, it's not a very complicated one. If you are actually creating an application using the OpenAI API, then you are likely to get into the field of more complicated prompts, so you might be including some examples in the prompt, you might have some formatting of that prompt, you might be giving it some separators to separate out the examples from the overall query or question, and things like that.

(07:08):
When you get more into that intricate detail of the prompt with examples, with separators, possibly other requests going on as well, you might be, if you're generating human language, thinking about the length of the completion, thinking about the tone, whether it's academic or comic or what have you, so there's all sorts of things that you can do with prompts. And I suppose the term prompt engineering describes more the situation where you are trying to write more complex prompts to get more specifically what you want and basically using all of the tools available to you.
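To make that concrete, here's a minimal sketch of the kind of engineered prompt Tom describes: an instruction about tone, a couple of few-shot examples, and ### separators between the examples and the real query. The products and descriptions are invented for illustration.

```javascript
// Building an engineered prompt in JavaScript: an instruction,
// two few-shot examples, and ### separators before the real query.
const prompt = `Write a short, witty product description.
###
Product: solar-powered torch
Description: Finally, a torch that charges itself while you lose it in the garden.
###
Product: waterproof notebook
Description: For ideas so good they survive the shower where you had them.
###
Product: self-stirring mug
Description:`;
```

The examples teach the model the format and tone you want, and the separator tells it where the pattern ends and its own completion should begin.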

Alex Booker (07:42):
There are some specific phrases that these models respond better to, in the case of natural language, it's just a bit easier to talk about. You might write, make this funny, or you could say, make this in the tone of a comedian or make this in the tone of a specific comedian. There's probably five or six different obvious ways to express the same thing, maybe a prompt engineer understands a little bit what is going to get the best results. Plus, I bet there has to be a way to kind of express more, if you've got a very simple prompt, like sum two numbers, very simple, there's not really many options there.

(08:17):
But if you start getting into the territory of writing full programs or full articles or anything like that, well, now you need to think about, do I ask it to do it step by step or do I use subsections or lists or whatever? I'm sure there is a specific way to get the most out of these tools, and that's probably what prompt engineering is describing, still it's pretty bloody impressive what you can do with just hammering stuff into the inputs.

Tom Chant (08:40):
Yeah, I think that that's absolutely true and how you approach a problem, as you say, are you going to break it down or are you going to ask for it all in one go? Things like that are very important and very relevant. Call me a cynic but I can't help thinking that prompt engineers at the moment are doing quite a lot of trial and error and quite a lot of hit and hope.

(09:01):
In my experience, it's not just that the same prompt doesn't get you the same completion each time, it doesn't get you the same output, but also over the course of some weeks or months, things can actually change quite radically, and I'm sure there is evolution going on, although they're no longer training the models. I suspect there are tweaks going on, which does mean that it is very hard to give absolute rules and advice for things which will always work in terms of prompt engineering, because they really can change quite a lot.

Alex Booker (09:36):
I suppose the thing about ChatGPT, and Copilot for that matter, is that the input is fairly straightforward, it's a text box essentially, or it's the Tab key in your editor with Copilot. But these tools use OpenAI under the hood, and you yourself, I know, have been not only learning about OpenAI and using it, but you've recently created a course showing developers how they can utilize it. I suppose in theory, you could build your own skin for ChatGPT essentially by using the OpenAI API, just showing the power of something like that.

(10:08):
All of this to say, I know that these tools we use and know are quite one-dimensional, you just put your input in. But as I understand it, under the hood there are quite a few different things you can configure, and I just wonder if you could tell us about those?

Tom Chant (10:20):
There are quite a lot of possibilities. Temperature is one of the most fundamental really, and what temperature does is determine how daring the model is going to be, essentially. If you have temperature set quite low, it's not going to be very daring, and that's great if you are asking quite factual things. So you are just asking maybe to name you some famous people from history that did X, Y or Z, or you want some geographical information, or something where there's a clear right or wrong answer, you want to have the temperature pretty low.

(10:53):
The more creative you're being, the more you want the model to actually think outside the box, as it were, and start giving you slightly wackier ideas. So then you set the temperature pretty high and it starts to be more creative and a bit more off the wall, and that's absolutely great if you are generating stories. You might want to do that also if you are writing articles that are not dry journal articles for academia, but something much wittier and more amusing. Then you might just want that higher temperature and therefore something slightly more, I don't know quite what to call it, not exactly out there, it's still very readable English, but just slightly more daring.

Alex Booker (11:37):
More novel maybe. What are the other variables you can configure?

Tom Chant (11:41):
There's max tokens, there's top P, there's presence penalty, frequency penalty, and stop sequence. Those are the key ones, I'm not promising that there aren't some more obscure ones.
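For reference, here's roughly where those knobs live in a completions request body of this era. This is a hedged sketch: the values are illustrative rather than recommendations, and the model name reflects the API as it was around the time of this conversation.

```javascript
// Sketch of an OpenAI completions request body showing the
// parameters Tom mentions. Values are illustrative only.
const requestBody = {
  model: "text-davinci-003",
  prompt: "List three ideas for a podcast episode about CSS.",
  temperature: 0.7,       // ~0 = safe and factual, ~1 = daring and creative
  max_tokens: 100,        // hard cap on the length of the completion
  top_p: 1,               // nucleus sampling, an alternative dial to temperature
  presence_penalty: 0.5,  // nudges the model toward new topics
  frequency_penalty: 0.5, // discourages repeating the same tokens verbatim
  stop: ["###"],          // cut the completion off at this sequence
};
```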

Alex Booker (11:54):
My goodness, there's a lot to unpack there. We were just talking about how you never really get the same output twice, and with prompt engineering I guess it's never truly deterministic, but then you throw all these variables into the mix. It just got me wondering, is there a benefit for us as developers to get access to those variables, or should we just use what ChatGPT and Copilot give us, which is presumably meant to be a good balance?

Tom Chant (12:18):
I think for Copilot, when you are generating code, obviously you are not really wanting it to go and be creative, you want it to give you something that works. So I think whatever Copilot is doing under the hood, they will have thought about in a lot of detail and tried multiple configurations before coming up with what they're using.

Alex Booker (12:37):
You might want whimsical writing, but you probably don't want a whimsical, novel function that spins in a loop before doing what you want.

Tom Chant (12:45):
Yes. When you are working with the OpenAI API in more creative ways, then it's great to have all of those at your disposal. It can be frustrating to use them because you never get the same results twice, and you never quite know exactly what they're doing, because you don't know what you would've got in an alternate reality where you didn't use that setting.

Alex Booker (13:03):
That's really good to know, I don't feel like I'm missing out anymore then. We spoke a bit about the ways in which developers are using these tools. I mentioned, for example, you can have it generate a function, maybe even customizing it to some context like the argument names, and you gave the example of writing a comment in Copilot describing what you want, which is pretty much the same thing, generating a function. What else are developers using these tools for?

Tom Chant (13:27):
The big one you haven't mentioned is debugging. There's nothing like just pasting a whole load of code into ChatGPT and asking it to tell you why you're getting this error. I haven't found that to be 100% successful, I have to say, but what I have found is that it starts throwing out ideas, and you start going through those ideas, and eventually either you or it comes to the correct conclusion, so it's really useful in that way. But that's another good example of how it doesn't replace a developer, because it's just not reliable enough to do that, but it does really help. So I think debugging is one big one, testing is another.

(14:02):
And I think it is nice the way you can ask it a question that you might also ask a colleague, particularly for junior developers, less experienced developers you might want to say, are there any security implications in this code here? You might want to say ... I think I speak for half the world when I say that there are elements of CSS which just remain mysterious. Apart from anything else, just being able to say that, here's a massive chunk of CSS, is there anything repetitive here? Have I selected multiple things? Have I got CSS, which is actually canceling itself out in some way? As well as, are there better ways of doing it? Could I group my selectors better? Could I make it more readable? Could I put it in a better order? Could I comment it?

(14:47):
Things like that for me are priceless because those jobs are a pain. They're things you make mistakes in, but you don't really know you've made the mistakes because they're not critical errors, they're just sloppy, and then they're there in the code base, or in my case in the course that you are recording, and when you recognize it later, you kick yourself.

Alex Booker (15:05):
I was only thinking about generating code initially, but I love this idea that you can ask ChatGPT to point out errors in the code, or potential errors, or security problems. I also like the idea of almost using it like rubber duck debugging. There have been quite a few occasions where I've used ChatGPT as like a sidecar, I just have it in the right third, or quarter even, of my monitor and I'm just chatting away to it. I'm like, what's another word I could use for this variable? Or in the case of writing I'm like, could you rephrase this? Or just little things that frankly are so specific I don't think I would've got a good Google result for them. What kind of word is this, is it a proverb or is it a saying or something? So it's exactly like you say, having that wiser colleague in your quarter of the screen.

(15:51):
But of course, I couldn't help thinking, when you said security, as a new developer especially, and any developer for that matter, it's awfully bold to rely on ChatGPT to detect security problems with your code. Or it's bold to assume that the code it generates does not include security problems. We'll use security to highlight the seriousness of the issue, but it could be any kind of error, it could be that you don't have the right guard clause in place, or you've got an off-by-one error, or anything like that.

Tom Chant (16:19):
Well, it's really interesting, you just described it as having a wise colleague in the corner. I think you have to remember it's a colleague whose tone of voice is very confident. In a way, it reminds me of a TED Talk, because TED Talks are full of great communicators who have got their little thing to say, and you can find yourself just totally trusting them. But of course, just because somebody says it with confidence, it's not necessarily true, and that's why it very much is only a tool.

(16:50):
And I think rubber ducking is the word really. It is more about how you can use it to stimulate your own thought processes and problem-solving processes, not completely offloading them onto ChatGPT. You can to an extent, and of course, the things it points out to you are worth checking up on. But you definitely can't 100% rely on that, you have to take it as a suggestion. I would always look at it as, why don't you look at this, rather than, do this.

Alex Booker (17:23):
That's such a great point. I'm sure we could pull up a few TED Talks from the last 10 years where they sound absolutely authoritative on something which was later disproved. I'm also thinking of Elizabeth Holmes and her biotech company and how it was just a big fraud, but because she presented it with such confidence, everybody took it as gospel. And the same is true for what AI generates, sometimes it hallucinates things, other times it's just wrong. And I notice this because another thing we could add to the use cases for ChatGPT, and any prompt-based AI tool for that matter, is asking it just general questions. You might wonder, what's the difference between server-sent events and WebSockets, for example? Why should you google it if you've already got this authoritative-sounding yet not always correct expert in the corner of your screen?

(18:08):
And then I've noticed with these things, sometimes it just lists a bunch of stuff, it sounds very confident, and you are so new to the subject, that's why you're searching it, so you really don't know if it's right or wrong. It's only when I've prompted it with things that I know about already, where I just want a quick refresher, or maybe I'm writing about it and I want ChatGPT to do the writing for me and get me started, that I'm like, that's not right. But it's only with the benefit of hindsight and experience that you can make those decisions, which as a new developer, you don't necessarily have. Does that mean you should leave all these productivity gains on the table?

Tom Chant (18:41):
No, I don't think you should completely exclude them at all, I just think you need to know what they are and what they aren't. You have to remember that this is a large language model, and what that means is, it is predicting the next chunk of language based on probability, and it doesn't matter if that's human language or code language, and it does a pretty similar thing when dealing with images. What that means is that just because something is mathematically the most plausible definitely doesn't mean that it is correct. As long as you bear that in mind and you know the limitations of it, then use it to your heart's content.
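As a toy illustration of that point, the probabilities below are invented, but they show how "mathematically most plausible" and "correct" are different things.

```javascript
// Toy illustration: a language model completes "The capital of
// France is" by sampling from a probability distribution over
// possible next tokens. These numbers are invented.
const nextTokenProbs = { Paris: 0.62, Lyon: 0.21, Nice: 0.05 };

// Greedy decoding picks the single most likely token...
const greedy = Object.entries(nextTokenProbs)
  .sort(([, a], [, b]) => b - a)[0][0];
console.log(greedy); // "Paris"

// ...but at higher temperatures the model samples from the whole
// distribution, so a plausible wrong answer like "Lyon" comes out
// some of the time. Plausible is not the same thing as correct.
```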

(19:19):
But it's just that you don't absolutely rely on it, you verify the crucial points, and you still need to read around your topic from reliable sources, that's inescapable, ChatGPT does not replace the rest of the knowledge that's available on the internet. And with something like Stack Overflow and the way it works, you will be able to see from the upvotes and the downvotes and the comments, et cetera, how reliable that information is. And of course, you've also got a date on it, so you know when that information was reliable. Little things like that, of course, are completely missing from ChatGPT. And that's just something that you have to be aware of as a new developer, it's a help but it's not everything.

Alex Booker (19:59):
What about the ethical considerations here? Doing a job interview take-home task as a developer, I honestly might for a second wonder if I should use ChatGPT for something like that, or if that would be considered cheating or something?

Tom Chant (20:14):
I think that touches on one of the biggest points, and it's a wide point really about the entirety of education in a way. The way that we've tended to look at things has been to judge people based on what they can produce under exam conditions, or what they can produce in something like a take-home task or coursework if you are at university or school. And that model is semi-broken, and in a way, I'm hopeful that ChatGPT is going to finish it off, because I don't think it really works. What I think we want to be doing is seeing what people can achieve with the tools available to them, and ChatGPT is a tool that's available to them, and it's a legitimate tool. So I think for doing a take-home challenge, you use what is at your disposal, and that, of course, includes whatever you can Google and whatever you can get from ChatGPT and whatever you can get from asking in the Scrimba Discord channel or phoning your friend who's a developer. There's nothing wrong with doing that, that is the real world now.

(21:13):
If I was employing a developer or judging a student who's about to leave school and marking them, the only real way of understanding that person and their capabilities is to sit down with them and talk to them. And what I'm hoping for, is that we'll see now much more of that, it will be a question of, what can you achieve when you are working independently with all of the tools you've got at your disposal, and what are you like to talk to? How much of that knowledge is actually things that you have a deep understanding of?

(21:46):
And that's why I always would say to a student who's heading for a job interview, they're going to be really impressed by the person who can tell them what's going on under the hood. Being able to know what a piece of code does is great, but can you really get your head around why it's doing that, what's actually happening there and what the ramifications of that are? And that's the thing you only really get from somebody when you're having a conversation with them. So I would say use it, it's real world and you want to be productive, you make use of all of the tools you've got.

Alex Booker (22:18):
Yeah, why should you hinder yourself? And to be honest, it's not morally any different than copying a snippet from Stack Overflow. If I'm completely honest, I had a situation that comes to mind where I had to detect the difference between two images so that I would only transmit the change and reduce bandwidth on the network, and I didn't know how to do that. So I started Googling and ended up finding a snippet on Stack Overflow and it did the job. If you asked me to explain it to you, I don't think I could, to be honest, and I'd have to navigate that if it was a job interview. But the point is, I still thought like a programmer: what are the potential inputs here? What could go wrong? Let me throw some test cases at it. I just got the job done.

(22:59):
If I use a library to store something in a database using ACID or whatever, I don't know how that works either, but it just so happens I'm deferring to a library, and I don't know how the system navigates the kernel and the operating system either. The whole point of software development is that you're always predicating your work on standing on the shoulders of giants. Snippets are like the fringe of that, and what you're doing with ChatGPT essentially is generating snippets. That's also what you do when you copy code from Stack Overflow or GitHub Gists, or even portions of open source libraries, assuming the license permits it.

(23:31):
I suppose what is a moral issue is if you are not forthright about it. In a take-home interview, there's a trap people can sometimes fall into, I think, which is that they want to make a good impression, they have the best of intentions, so they stay up late, they wake up early, they spend a bit more time and effort on it. But then I think what can happen, if the communication isn't open between you and the company, is that they might have only been expecting you to spend a few hours on it and therefore they get a false impression of your efficiency, which means when you do get the job, you can't keep up with the pace.

(24:02):
I've heard of that happening. I suspect if you use ChatGPT, it could be a similar story if that company isn't okay with you using ChatGPT. There is obviously an issue around copyright, and also security for that matter. I don't think companies who pay developers a lot of money want to end up on the front page of the New York Times with a massive vulnerability, and when they do an investigation, it turns out that a junior dev, or even a complacent senior dev, generated the code with ChatGPT that caused the issue. Some companies have policies where you shouldn't use it because they don't quite yet understand the implications.

Tom Chant (24:35):
Absolutely. All I would add to that is that I think in the job interview scenario, the onus is on the employers to set the parameters very clearly. And I also think, if they only want you to do something for a couple of hours, they should arrange a time with you and say, we're sending you the challenge at this time and we want you to send it back to us at this time, rather than actually giving you the opportunity to spend a long weekend on it, which as you say, comes with all sorts of problems. And it is totally understandable that some companies do not want their developers to use ChatGPT, it comes with its risks and that New York Times headline is probably already out there being coded right now and it's only a matter of time before it's on the front page.

Alex Booker (25:16):
I like that.

Tom Chant (25:17):
And I expect, blame the AI will become a thing, like the dog ate my homework kind of thing. But it would be the AI.

Jan Arsenovic (25:25):
Coming up, why AI isn't going to replace developers just yet.

Tom Chant (25:30):
Pretend you know nothing, try and build an app using just ChatGPT, you are going to feel a bit more secure in your job choices.

Jan Arsenovic (25:38):
Hi, I'm Jan, also known as Jan the producer. Tom and Alex will be right back, but first, I want to quickly take a look at some of your recent tweets and other social media posts about the podcast. Vanessa V shared the following post on LinkedIn: it makes no sense, but there's this idea of expanding your luck surface area, the more you put yourself out there, the greater chance of being lucky, quote by Alex from Scrimba. I love the latest Scrimba Podcast episode, which is titled How to Get Your First Dev Job by Playing Call of Duty, with Scrimba student Sean. The story is interesting and amazing and I think you'll get value and comfort out of it. Thanks for sharing, Vanessa.

(26:18):
And Diego Aguero shared our recent interview with Laura Thorson from GitHub, saying, if you don't know what to do with your LinkedIn and portfolio because you don't have relevant coding experience, listen to this episode of the Scrimba Podcast. And over on Twitter, Carlos at Life Out Loud code shared our episode with Angie Jones, saying, a part that I really resonate with from this podcast was when Angie explained what makes a good teacher: breaking things down to their most basic concepts and being able to relate them to something familiar, AKA, hanging it on a hook. There's comfort in knowing that you're not the only person out there that may look around and see others getting a topic easily while it may take you some time. Needing to see something in practice to be able to put it into context can greatly help you retain it.

(27:08):
Well, thanks everybody for sharing your takeaways from the show, it really means a lot. By the way, if you're feeling super supportive, you can also leave us a review on Apple Podcasts or whatever may be your podcast app of choice, but for now we're back to the interview with Tom.

Alex Booker (27:27):
I'm curious, a bit later in the interview, to get your thoughts on what this means for the development industry and new developers getting a job, is it still worth learning to code, all that good stuff. I'm also very keen to learn about your new course, and it's very fascinating actually: we're using these graphical interfaces to interact with OpenAI, but can't we interface with OpenAI directly as well, using their API? I'm sure there's a lot we can get into there. But before we go there, I know you used to be an English teacher in a past life, well, I think you went from teaching English to teaching on Scrimba if I'm not mistaken?

Tom Chant (27:59):
With a break in the middle, I didn't switch directly, but, yes.

Alex Booker (28:03):
So, no?

Tom Chant (28:04):
Well, sort of, because I was a part-time English teacher and part-time freelance web dev for a while before the pandemic. And then in the pandemic, I was more web dev than teaching, and then I was at Scrimba, so it's complicated.

Alex Booker (28:19):
I just wanted to ask, in your experience, because I totally agree with what you said about testing, I think it's so broken how you can cram before a test and get a good score, but like a sponge, you inevitably spill all that information, you don't retain it. I also think it rewards a certain type of student compared to other students who can actually be very impactful, successful, intelligent, productive, they just don't vibe with this format necessarily, and maybe they'd benefit from practical assessment, that kind of thing. I have lots of thoughts and opinions about this, it's one reason I love Scrimba so much, it democratizes learning to code for everybody, and as we know, the coding world is one of the best because you don't need a degree necessarily, you can prove your merits however you can, whether that's open source contributions, a great interview assessment, whatever.

(29:05):
But I just wanted to dig into this quickly, because when you said you were hoping that AI might be the nail in the coffin for that type of examination, I wondered what that means for students. Obviously, you won't talk about individuals, but did you ever come across situations where really bright people didn't do exams very well, but they were great at other things? And maybe you knew some students who had great test scores, but when you spoke to them, you didn't quite see that in them when you had the conversation, like you previously described?

Tom Chant (29:32):
Absolutely. In a way, I'm a prime example of that, fantastically intelligent, but I was never that good at exams, no, I'm joking. But I think a lot of people fall through the massive gaps in the education system for exactly that reason, they're just not cut out to absorb a load of information and then discharge it for a couple of hours and the rest just soaks away. And I think it also fits in a bit with what we were talking about earlier about sounding confident like ChatGPT sounds confident. I think our system favors people who can absorb information quickly to pass exams and people who can express themselves with a great deal of confidence.

(30:11):
And I think there are a lot of fantastic things in this world, of course there are, and I don't want to just focus on the negatives, but all of the really stupid ideas in history have also been done by people who passed exams and who could express themselves with confidence. So what I'm hoping for is, well, exactly what you just said about Scrimba, democratization, just letting people's natural ability come out. And I think interestingly, ChatGPT might be part of that purely for the effect that it has on how we deal with information and how that information is available to us. Hopefully, that will be a big catalyst for change in some way.

Alex Booker (30:49):
The reaction to GitHub Copilot, it was a little bit controversial, like you said, it sparked up a lot of debate, but frankly, it was lukewarm compared to what happened just a year later with ChatGPT. I think this has really got people's imaginations going, it's more widespread than just coding and beyond what people thought was imaginable. It's going to affect our world in many ways, I'm sure, I just don't know that we know exactly how yet, and that's one reason we're recording this episode and hopefully keeping on our toes ourselves. Maybe there's a way to get ahead of the curve a little bit, I don't know, tell us about the motivation behind your OpenAI course and what it's all about?

Tom Chant (31:25):
The OpenAI course that we just released at Scrimba, which is also on YouTube actually, is basically all about using the OpenAI API and bringing that into your applications, so you've got all of the power of OpenAI right there in the application. And there are actually a million ways that you could use that. What I really wanted to show in the course, well, there were various things that I really wanted to show, but one of the most important was how powerful that API is. Now, as anyone who's worked with APIs, anyone who's been a web developer, anyone who's worked in tech ed will know, I've probably seen millions of APIs, but I can't think of one that's as insanely powerful as the OpenAI API, it just gives so much potential and there's so much you could do with it, it's hard to even scratch the surface.

(32:14):
But one thing that I really wanted to show is how all of that is available to people who don't necessarily know all that much code. You do need some JavaScript for this course, it's written in vanilla JavaScript, all of the apps are in vanilla JavaScript, but it's not terribly complicated vanilla JavaScript. If you've been studying JavaScript for a couple of months, you can do an awful lot in web applications with the OpenAI API. That was one motivation, just to say, look, if you want to get a sound understanding of what AI can do for web devs right now, and you want to have that on your CV, you want it in your LinkedIn, you want it particularly in your profile projects, sorry, your portfolio projects, then this course is for you, because that's basically what all this is about, and we'll show the foundations and then build some things and then you can let your imagination run wild.
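For a sense of scale, here's a hedged sketch of the kind of vanilla JavaScript call such an app is built around. The endpoint and model name reflect the OpenAI completions API of this era, and the API key handling is simplified for illustration.

```javascript
// Minimal sketch: calling the OpenAI completions API from JavaScript.
// OPENAI_API_KEY is assumed to be defined; in a real app you'd keep
// it on a server, never in browser code.
async function getCompletion(prompt) {
  const response = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "text-davinci-003",
      prompt,
      max_tokens: 100,
      temperature: 0.7,
    }),
  });
  const data = await response.json();
  return data.choices[0].text.trim();
}

getCompletion("Suggest a name for a podcast about learning to code.")
  .then(console.log);
```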

Alex Booker (33:08):
What can someone do after watching your course that they couldn't before?

Tom Chant (33:12):
They can do whatever ChatGPT can do, so they can generate language, and that's one key feature of the course. And I think when the new OpenAI models came out last year and everything suddenly went completely crazy, the reason why it got so much media attention was the quality of the language it could generate. This was human-quality language, as if it was spoken by an adult native speaker, knowledgeable, confident, as we've talked about. I think having the power to generate that language in applications is immense, because you can create copy, obviously, you can interact with users, building chatbots and having those chatbots interact with users either in general terms or, as we might talk about in a moment, in more specific terms which are just related to your company.

(34:03):
I think those are some of the biggest uses of OpenAI in web dev, it's all about language generation. And on top of that, you've got things like sentiment analysis, so you can potentially tell the mood of the person that's interacting with your website, you can analyze their behavior more by analyzing their language. And I think one thing that web devs are going to face more and more as their careers progress over the next couple of years, for example in e-commerce, but all over the place, is that the managers and the product designers are going to be wanting to harness the power of artificial intelligence to get more from the customers, more sales, for example, or more feedback, or more ways that they can improve on their products. But I think AI is essentially going to bring about a bit of a revolution in that sense on its own, because it's actually going to give web developers the power to analyze their users in ways that they haven't done before.
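Sentiment analysis in this style can be as simple as a carefully worded prompt. Here's a minimal sketch, with the user message invented for illustration; it could be sent with a helper like the getCompletion sketch above.

```javascript
// Minimal prompt-based sentiment analysis. A temperature of 0 suits
// classification, where you want a deterministic, single-word answer.
const message =
  "I've been waiting two weeks and my order still hasn't shipped.";
const sentimentPrompt = `Classify the sentiment of the following message
as positive, negative, or neutral. Reply with one word.

Message: "${message}"
Sentiment:`;
// Expected completion: "negative"
```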

Alex Booker (35:07):
You make a fantastic point that ChatGPT is quite general, whereas there might be benefit in somehow tailoring it to your specific data. So if you're doing customer support, ChatGPT doesn't know anything really, ChatGPT is trained on public data. If you've got a huge business, then probably some of your data is what helped train ChatGPT. But for most of us, for example, ChatGPT isn't going to be able to answer questions about Scrimba and Scrimba's billing or what courses we have coming up or anything like that, it just has no idea.

(35:38):
Say, for example, Scrimba wanted to add a feature like a chatbot that let users ask it about upcoming courses or any question about Scrimba. What did Tom do before teaching at Scrimba? Well, he was a part-time English teacher, part-time developer for a bit. We could have it give users the same value that they get from ChatGPT, just specific to Scrimba. The only way to do that is by Scrimba interacting with the OpenAI API, not only to expose the chatbot type of thing, but is there a way to fine-tune and train the model as well, based on some existing data?

Tom Chant (36:12):
Absolutely. You've got two options really. The first option is very basic, and that is where you simply give some examples with your prompt. What you can do is include some data in those examples, so if you've just got a page of text, you could put that text in your example and the API will happily answer questions from that text. But that's not sustainable if you've got a massive body of data that you want to answer questions on, and that is where fine-tuning comes in. And what fine-tuning is, is basically this: you take your data, and in your really good example of Scrimba, we would have a load of customer service data. We've got emails, tweets, anything people have sent us and we've answered, and that is a goldmine of data.

(37:00):
And what you have to do is take that data and go through it pretty carefully, make sure everything is correct, it does need to be human verified, that's important. Once you start training the model on it, if you've got faulty data in there, you are obviously going to get faulty output, so you make sure that data is saying exactly what you want it to say. You then need to format it in the specific way the OpenAI API wants that data to be in, which is not a big problem by the way, they make a tool which does a lot of the work for you, although you do need to do some work yourself and it can be quite laborious.
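For reference, the fine-tuning format of this era was JSONL, one prompt/completion pair per line. The examples below are invented, and the "->" separator and leading-space/trailing-newline conventions follow what OpenAI's data preparation tool suggested at the time.

```javascript
// Each line of the training file is a JSON object with a prompt and
// a completion. These question/answer pairs are made up.
const examples = [
  { prompt: "What are your opening hours? ->",
    completion: " Support is online 24/7 and replies within one business day.\n" },
  { prompt: "Do you offer refunds? ->",
    completion: " Yes, within 30 days of purchase. Contact support for help.\n" },
];

// Written out as JSONL: one stringified object per line.
const jsonl = examples.map((e) => JSON.stringify(e)).join("\n");
console.log(jsonl);
```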

(37:34):
And then you pass all of that data to the OpenAI API, it will go through a process of training one of its models specifically on that data, and you can then ask it questions which are specific to your company. That might be something really basic like, what's your phone number, or what are your opening hours, or do you have shipping charges on your products, and how do you handle customs for international orders? And then it could also be something where you get to see the generative, human side of AI. So someone saying, well, look, I'm really unhappy with you, where's my order? You've lost my order. And at that point the chatbot can come in and say, I'm really sorry, and give a human side, as well as being able to say, here's our compensation policy.

(38:26):
In a way, fine-tuning I think is a really huge potential skill, because it's what allows you to make AI your own and have it do what really needs to be done. Because as you said, if you have a general purpose chatbot, and we do build one in the course, it's a lot of fun and it does have its uses. It does depend obviously what company you are building websites for and what they want to do, but I think being able to really drill down into some specific data and get answers on that, that is a giant leap, and I think it's going to be even more massive in the future. Of course, we're talking here about customer service data, but you could do it with statistics, you could do it with financial data, you can do all sorts of things.

Alex Booker (39:13):
Is it possible to fine-tune the model on the fly? Maybe things are changing fast, you just released a new feature, and now you're getting a bunch of new support tickets, maybe the AI model can fine-tune in real time. Or do you really have to make it a designated step, where you also have what you described, basically human input, where you have to confirm it makes sense?

Tom Chant (39:32):
You're going to face the same problem that they have with, for example, ChatGPT and the models it's built on, so at the moment you'll be using either GPT-3.5 or GPT-4, which stopped training some time ago. So you can see that if you ask it who's the president of the USA, or a time-sensitive question like that, you might get out-of-date information. Now, when you're working with your own dataset, you can get around that problem, because you don't actually need to go through the whole process every time you want to make a change. What you can do is add some data to your dataset and then only tune that new data. So you are actually keeping all of the work you've already done, you are just adding to it.

(40:17):
That might cause you a problem if you've now got data in your OpenAI-tuned dataset which is outdated. If things have changed that much, that is going to cause you problems and you will have to start again. But the way we do it in the course is using the CLI tool that you get from OpenAI, so you're just working down there in the command line. But all of this stuff is pretty easy to automate. We don't do that in the course, but it is certainly perfectly plausible that you could automate that process and update your fine-tuned model every month, every six weeks, every day, whatever you prefer, so it is doable. But the thing you mentioned, having a human being check the data first, is so important.
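As a rough sketch of that command-line workflow, hedged because the exact flags changed over time, the openai CLI of this era worked along these lines. The file and model names are made up, and the legacy fine-tunes API allowed continuing from an existing fine-tuned model by passing its name as the base model.

```bash
# Clean and reshape raw data into the expected JSONL format
openai tools fine_tunes.prepare_data -f support_tickets.jsonl

# Train a base model on the prepared file
openai api fine_tunes.create -t support_tickets_prepared.jsonl -m davinci

# Later, continue from the resulting fine-tuned model with only the
# new examples, rather than retraining from scratch
openai api fine_tunes.create -t new_tickets_prepared.jsonl \
  -m "davinci:ft-your-org-2023-03-01"
```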

Alex Booker (41:01):
But when you say a human check for data, does that mean you double-check you're not feeding OpenAI something sensitive? Maybe if you're sending it a bunch of support emails, you shouldn't be sending the one with your biggest customer where you reveal their card number or something like that. Or is it more like the other thing, for example with ChatGPT, where they have humans who align the model, who steer it based on what OpenAI determined to be moral, steering the model away from certain phrases or things for various reasons? And even as a user, when you get a completion, it asks you to thumbs up or thumbs down, that kind of thing, and I assume the whole point is that that feeds into it a little bit to help it be better. Which one of those are you describing from a human input point of view, or is it maybe both?

Tom Chant (41:45):
It is a bit of both. I was thinking more about the former one. You want to go through that data, and as you say, you don't want credit card details in there, or private email addresses, et cetera. But also, you want to make sure that whatever customer service agent dealt with that problem, the information they gave was correct. That's why that task is really important, and it is really not an insignificant task, because they do say that the amount of data you should use to fine-tune a model is pretty big, we're talking about hundreds or thousands or even tens of thousands of examples, so it is a big job.

Alex Booker (42:21):
To be very honest, that doesn't sound that big.

Tom Chant (42:24):
I suppose it sounded big to me because I was trying to create a dataset to use in the course, and we used a much smaller dataset. But it's true, if you are a company like Scrimba and you've been in business some years, you've got that, but it is a lot to have people check manually.

Alex Booker (42:37):
That's a great point. That's actually so wild, because my instinct even today is to use an AI tool to help you. I know it's slightly different because one is using an AI tool and one is feeding the AI tool, but you could give ChatGPT some criteria, for example, remove any messages that include credit card numbers or whatever, and then feed that in, but then you probably shouldn't be sharing that data with OpenAI in the first place.

Tom Chant (42:58):
But you could have it do things like remove racist or offensive language. If you've had really angry customers hurling abuse, you wouldn't want to put that language into your model for fear that it's actually going to shove that language out the other end when your customer asks it something. So you probably can automate part of that job, but ultimately, it is going to have to be done by a human.

Alex Booker (43:21):
Absolutely brilliant. This is so exciting because we can all benefit from AI, however, this is how you really harness the power, by dropping down a level and utilizing the API so that you can add features just as mind-blowing as ChatGPT to your own applications, and the best part is that your users won't even know you're using OpenAI. What are your thoughts about how AI and developers might coexist in the future? Maybe you think they won't coexist, maybe developers are redundant?

Tom Chant (43:52):
I actually don't think that AI is going to dominate the world of development, I think we're a whole epoch away from that. I think what we've got at the moment is an entirely new concept, and when we look at how that exists on the curve of its natural life, I think we're actually much further along the path than we might expect. And I'm saying that because we saw the internet evolve right in front of our eyes, it was changing all of the time, and it started off really ropey and developed into this really amazing thing.

(44:28):
I think there's been a lot of bad AI so far that most of us have not had to interact with, and what we're actually seeing now is the fruits of that. And I believe that for the next decade or so, we are going to be tweaking the large language model idea and we're going to be getting better and better results, but we're going to be faced with the same problems we've got now. And as a newsreader put it the other night, when you've got over how fantastic OpenAI is, you start to see how bad it is. That's a bit harsh because it's not bad, it's absolutely amazing, but it is not at the point where it's replacing people, apart from in a very few specific circumstances.

(45:13):
Absolutely, we are still going to need coders and developers for a long time yet. Will the job change? Yes, but it's always been changing anyway, so it is not really anything to be scared of. In a way, I think it's becoming cooler because we're going to be able to add even more amazing things to websites, we're going to be able to do less of some of the really boring, laborious tasks. And I think we're going to have more time because our productivity will be better to actually do the job and to really improve the products that we are making. So I think at the moment, it's a win for developers, I'm going to say it's a win for developers. Now in the next epoch, that might change if we take a giant leap forward.

Alex Booker (45:58):
There's the caveat, I thought you were going to end on a really strong affirmative prediction.

Tom Chant (46:04):
In a way, that all comes down to something that we haven't really talked about, which is regulation, whether regulation is coming and how good that regulation will be. That's a big topic and it's one we don't really know too many answers to. My prediction, and it is a positive prediction, is that AI is big enough and scary enough to warrant regulation. I think governments are already getting on board with it, and that is actually going to protect us quite a lot in quite a lot of ways. I think somebody starting their career as a developer now will still be a developer when they retire, if they want to be a developer for all of that time, and of course, in those 40-odd years, the job is going to change immensely, but it was always going to change anyway.

Alex Booker (46:45):
Great points. The developer who started by writing COBOL isn't writing COBOL or coding mainframe computers and stuff anymore, they're in the cloud. Even more recently, developers who picked up one language like Pascal or Delphi or something, they're not using those anymore, things move on in terms of the technologies. But also, there are fundamental paradigm shifts like microservices, for example, the move to the cloud, and there are many more to come, I'm sure AI will play its role. Am I going to make a prediction? I don't know, I'm really reluctant, but overall, I basically agree with you, Tom. I think, certainly in the near term, AI is going to augment developers.

(47:19):
And you can't exactly ask it to generate a whole program for you today, there still needs to be an element of verifying the correctness and assembling everything. I think after that, probably the role of the developer will change, you'll be rewarded a lot less for writing specific code and perhaps rewarded more for thinking logically about a problem set and maybe getting a step closer to the product side. Plus, by the way, there are just emerging complexities. We have front-end developers here, and at Scrimba, we teach a lot of front-end development, and it doesn't take much imagination to see how these tools can be very useful for generating snippets.

(47:52):
But as soon as the problem becomes sufficiently complex, like a complex front-end problem, or sufficiently custom and new, like maybe you are coding up some infrastructure code or it's genuinely innovative, then AI can't make something that it's never seen before. And so I think it might cause a bit of a bifurcation between developers who are assemblers, who assemble software and verify the correctness of ChatGPT and the components or blocks, mixed with some prompt engineering, and, as I said, perhaps another category of developer, by a different job title, that is a bit more focused on innovation and what I might call true engineering. I'm not getting into the whole job title thing because that's a different story, but think about a car mechanic versus the person working at McLaren engineering the engines, I think there's going to be a bit more of a divide there.

Tom Chant (48:42):
I think that's quite plausible, but I still believe that we're a way off that, just because right now I don't think that the code production is good enough for assemblers to just assemble. I think they're still going to need a lot of engineering knowledge.

Alex Booker (48:59):
But that's the crazy thing about AI and the way these models train, it's exponential. So it goes from two to four, four to eight, eight to 16, that's the whole thing with AI, once it starts, you can't pull it back.

Tom Chant (49:12):
No, it's true. The large language model idea does have built-in limitations essentially. But who knows where we are in terms of how far we've got to go down this road? One thing that I would say now to somebody who's thinking, shall I become a developer, shall I go down this road, commit to all the training, et cetera? I would say, open up, for example, a scrim, or you could do it in any other online editor, but a scrim would be great for this, and allow yourself just really basic knowledge, you know that HTML, CSS and JavaScript exist. Pretend you know nothing else, try and build an app using just ChatGPT. I think you are going to find that you ask it to give you code, it gives you code, you paste it in, you get some errors, you pass the errors back to ChatGPT.

(50:01):
But I think as soon as you try to do anything complex, you are going to get yourself in quite a pickle, and you are going to find that you have to have a lot of coding knowledge. And it's not just any old coding knowledge, it's doing something which is actually one of the hardest things for developers to do, which is understanding code that was not written by you. And I think when you do that, you are going to feel a bit more secure in your job choices, because I think you're going to realize how far ChatGPT needs to go if it wants to replace developers, and all other AI for that matter. I hope I don't live to regret that, but I don't think I will.

Alex Booker (50:36):
It's okay, they're just words and we're not telling anybody to do anything. The only thing I'm going to tell you to do, if you're listening, is take to social media and tell us what you think the future involving AI and development is. Will it augment, replace, or maybe you have a totally different prediction or imagination about what could happen? Let Tom and me know, both our Twitter handles will be in the show notes. And with that, Tom, thank you so much for joining me on the Scrimba Podcast, it's been a pleasure.

Tom Chant (51:02):
Absolute pleasure to be here, I've really enjoyed it.

Jan Arsenovic (51:04):
And if you made it this far, please consider subscribing. We are a weekly show, and there will be a new episode in your feed next Tuesday. This was the Scrimba Podcast episode 118, thanks for listening.
