Episodes / #27

Talking Data, Analytics, and AI with Mike Carlo

July 19, 2025 · 29:08
Guests: Mike Carlo

We're joined today by the legendary Mike Carlo talking about all things data and BI. Come in and say hi!

Topics Covered

AI

About This Episode

We’re joined today by the legendary Mike Carlo talking about all things data and BI. Come in and say hi!

Watch

Embedded video and links available on the episode page.

**[00:00:00]** We're here with Mike Carlo again. Thanks, Mike. Thank you for joining us. >> Hello, I'm happy to be here again. This is just super fun. I love talking about web stuff. I'm all in on app dev stuff these days. And we had a conversation a while ago that I actually need to come back to you on, with some results from my hypothesis about AI things. So maybe we'll get into that in a little bit, but remind me at some point in time. Prompt me when you're ready. I'd love to share some of my conversation returns from what we talked about earlier. >> Oh, fantastic. Yes. And so much has changed since that conversation. It was so recent, and it feels like every two days I'm like, what? How does it change so fast? >> I can't keep up. >> It's ridiculous. >> It's incredibly fast. So what are you seeing? What changes are you seeing already? What kind of things are exciting you? >> Claude Code. >> Oh, I've heard about this. I've seen it. I haven't played with it yet. So I love that you're pointing this out, because now I have to be like, okay, one more thing I've got to figure out. >> I was using Cursor. >> Yes. >> Right. So Cursor was the thing I was testing out. >> Mhm. >> And now you're saying Claude Code. Okay. >> So Cursor has been leapfrogged by Claude Code. >> Not for everyone, I will say. >> Sure. It is mind-blowing. >> Yeah. >> I'll explain why. It is terminal-based. >> So I'll say that as well. Okay. **[00:02:00]** So if we get into it: Cursor, as we talked about last time, is great because you originally had VS Code. People love VS Code. You code in it. For those who don't program, it's just a place where you write your code. You have your files; everything's right there. You can do your source control, etc.
Now, Cursor came in and said, "Okay, let's do a fork: basically copy VS Code, which is allowed because it's open source, and embed AI features into it." So now you have this sidebar that's deeply integrated into the app. It's great because you're talking to it, and with agent mode it can modify your files and look between files. Amazing. I love agent mode. So good. And one of the benefits we talked about last time with Cursor, versus going to ChatGPT or Anthropic or Gemini, is that you don't have to be copying and pasting code into a chat somewhere else. It just understands the context, and you can tag parts of it. >> Now, the problem with Cursor, as we also discussed last time, is that it sometimes goes haywire with big code bases. If you have a large file, the context gets compressed, Cursor doesn't understand what's going on, and it'll give you some weird advice, so you have to clear it, and all that stuff. >> Been there, have done that. >> Yeah, so much. And so you start hearing about how GitHub Copilot now has access to more of the repo. Should I go back to that? **[00:04:00]** Etc., etc. But it's still a different workflow. And then comes Claude Code. For those who don't know, Anthropic's Claude has sort of leapfrogged other LLMs in terms of coding. It's very good at certain coding tasks. >> And so they came out with this thing called Claude Code, which is a terminal-based chat interface especially tailored to coding. >> But it connects to the repository and can understand a full, large repository, give you insights on it, and work directly with it. So yes, it's terminal-based, but there's an add-on for VS Code, Cursor, etc., so you can put it there and it'll work sort of like a sidebar. The commands work a little differently, but regardless, it's very similar. And it is ridiculous.
If you go on it, you can actually tell it: here's a code base I've never worked with, please explain it to me. And it'll go in, analyze everything, and give you documentation: this is what it does, these are the features, these are the issues in the repository. It'll go through everything. And in its agent mode you can actually tell it, okay, let's work on issue 534. It'll go in, read the code, read the issue, understand what it's supposed to do, and start fixing the issue. And then it'll do the PR. What? And then it'll commit. It'll run the tests. It'll do everything. So it can just start working down your issue list. >> This is insane. >> Ridiculous. It's so cool. And so yes, the command line is nice in this regard, because **[00:06:00]** the command line feels to me like something you'd want in a workflow, like, I'm going to do a check-in to a repo, right? The command line lets you chain a bunch of commands, so in that instance you could actually have a pull request review using Claude Code, right? And internally, our company is starting to find a little bit of value from using Copilot to review pull requests. If you're using GitHub, Copilot's already built into it; GitHub has its own Copilot. One thing I think is lacking, to your point here, Armando, is that you really do want to be able to plug in any kind of model. We're getting to the point where this is getting chaotic; these things are growing so quickly. >> Mhm. >> You're playing with Claude Code. I was playing with Grok 4 >> when it came out, and I don't have the premium subscription to that, because I don't want to pay the $200 or $300 a month for it.
So I was literally just copying and pasting code from the browser window and asking questions. Okay, this is a great transition. Last time, we talked about the ability to build an app and throw it away. >> Yes. >> One of my employees reached out to me and said, "Mike, I spent about 45 minutes building an entire JSON parsing app. We created it, we tested it, we used it. It wasn't perfect. It did some things a little bit **[00:08:00]** weird, but it did the job, >> and we were done." So we were able to build a little mini app for a client that had a very specific request. We knew how to do it mechanically; we just didn't have all the parsing done, and those were the technical pieces we didn't really want to build, because I don't want to write all that code. Well, with 30 to 45 minutes of time, we were able to stub out the app, use it, check it into our Git repo, give it to the client, and say, here's an app you should use. And we could even talk to it and say: app, write instructions for yourself on how to use it. So here are the instructions, written out in markdown: do this, do this, do this. And then you can refine them, tweak it, and hand it over. Boom, done. This was my first aha moment: we had just built the throwaway app we talked about. It's going to be so specific. It's going to be so good. You describe what you want. >> Now, in addition to this, after they did that test, I was like, "Oh, interesting. Okay, it's a Friday night. I've got nothing going on. It's the evening, I've got a little bit of downtime. I'm going to build myself a game." >> So there's apparently a game called Cows and Bulls, right? Have you ever played the game where you guess the letters of a word? The Wordle, I think it's called. >> So it's like a Wordle game.
And when **[00:10:00]** I was a kid, back in high school, I think even early middle school, I had a TI-82 calculator, and you could program them. One of the fun things to do was write these little mini programs and start to learn how to code. Well, one of the games I programmed was a remake of a very early, like 1960s, game. Cows and Bulls, I think, is the name of the game, but I called it Pico Fermi. Okay. >> It was all numbers-based. So you would specify how many digits you want to play with, three to four digits, whatever, and that's how many digits are in the number it picks. >> Okay. >> So now you have a randomly selected number, you type in your guess, and it analyzes your guess. "Pico" means a digit is correct but in the wrong place. "Fermi" means one of the digits is the right number and in the right place. So you kind of get two signals, and all it would output to you is just those words. It wouldn't show you which digit; it wouldn't tell you which digit. It would just say, for example, if the number is 4 5 6 and I guess 1 2 3 4, right, the four is in the wrong position, so it would just say the word "Pico". So **[00:12:00]** one of the digits in my guess would be right, but in the wrong position. So I basically used that concept and said: build Wordle for numbers. Pick my digits, make a number; you put your guess in, and it colors each digit green or red based on whether it's in the right position. And I was just talking to it: move the submit button here, add this. And then I tried to get a little creative and said, "Hey, AI..." I think I was using Copilot.
No, it was Grok. I was on Grok 4. I went to it and said, "Create an interesting scoring metric," and then I gave it the rules of scoring. I said the score can only max out at the actual number it randomly selected, right? That's the maximum score you could ever get. >> Okay. >> And then you get deductions based on how many guesses you take. Each guess divides the score in half, so more guesses continually decrease your score, right? And then I said, well, if you get a digit in the right place, that keeps the score; you don't get a deduction for that. But the right digit in the wrong place gets a small deduction, and a big deduction comes from a digit that isn't in the number at all. So the score basically burns down closer and closer to **[00:14:00]** zero, the more guesses and the more wrong things you guess. So it tries to incentivize you to earn fewer deductions. I just gave it the rules of what I wanted. >> Kind of complex. >> And said, think about it. And it spent about 4 minutes thinking through: okay, I could do it like this, okay, this goes here. We probably don't want repeating digits, because repeating digits would throw things off; you'd get "right digit, wrong spot" in a confusing way. So it made its own rule. It thought about repeating or duplicate digits, decided it didn't want them, and updated the code to make sure the randomly selected number would never repeat a digit. I thought, wow, that was pretty good. It actually built the experience. Anyway, all this to say: I spent probably about an hour communicating with the computer program. I had a vision of a game that I had built as a kid.
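The mechanics Mike describes (a secret number with no repeated digits, "Pico"/"Fermi" feedback, and a score that maxes out at the secret number and burns down with each guess) can be sketched in Python. This is my own reconstruction from the conversation, not his actual code, and the deduction sizes are guesses, since only "small" and "big" deductions were specified:

```python
import random

def new_secret(n_digits: int) -> str:
    """Pick a secret with no repeated digits (the AI's self-made rule).
    Digits 1-9 only, to avoid a leading zero in the numeric max score."""
    return "".join(random.sample("123456789", n_digits))

def feedback(secret: str, guess: str) -> list[str]:
    """'Fermi': right digit, right place. 'Pico': right digit, wrong place."""
    words = []
    for pos, digit in enumerate(guess):
        if pos < len(secret) and digit == secret[pos]:
            words.append("Fermi")
        elif digit in secret:
            words.append("Pico")
    return words

def score(secret: str, guesses: list[str]) -> float:
    """Burn-down score: start at the secret number's own value,
    halve per guess, then deduct per misplaced or wrong digit."""
    s = float(int(secret))               # max score is the secret itself
    for guess in guesses:
        s /= 2                           # each guess halves the score
        for pos, digit in enumerate(guess):
            if pos < len(secret) and digit == secret[pos]:
                continue                 # right digit, right place: no deduction
            elif digit in secret:
                s -= 1                   # right place missed: small (size guessed)
            else:
                s -= 5                   # digit not in number: big (size guessed)
    return max(s, 0.0)
```

For example, with secret "456", the guess "465" yields ["Fermi", "Pico", "Pico"], and a single all-correct guess of "456" scores 228 (456 halved once, no deductions).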
I talked to the computer using English, and it built the whole thing, and then I could just continually refine it. It was all an HTML file with some JavaScript in it, and I was thinking the next step would be: okay, I want to land this as a legit game on a website and put it out to the world. Hey, here's your equivalent of Wordle, but with numbers. Just do it and see what happens. >> So it was very interesting to see how **[00:16:00]** that played out. >> That's amazing. And also, the next step, right? You're like, "Well, where do I put it? Do I put it on a website?" Yeah, you could, of course. But then there are all these mini apps, like you were talking about last time, on Facebook and stuff, >> and YouTube, right? >> Could I just make it a mini game that goes there? >> Why not? >> A few years ago, or even a few months ago, I would never have tried to figure out how those mini games are built, right? But now you just >> don't care. Look at this game, figure out the documentation, make it a YouTube mini game. >> That's what I want. And now you can start pumping out these games super fast, cranking them out to all these different places. So someone like me, who's not super code-y and technical, can now build something that's interesting to me, the number game, whatever that is. And I'm also seeing, again, following Grok 4, which I just stumbled onto, people building full-on dungeon-crawler kinds of games, with sprites and animation, and again, you're basically just specifying the rules of the game, and these models are getting so good that they can just build the game for you. So what does this mean? This is making apps a commodity.
And I know we talked about this last time, but I want to re-emphasize: Microsoft was spot on that everyone needs to build apps. One hundred percent spot on. They missed the mark with Power Apps being the way we're going **[00:18:00]** to do it. It's not going to be that way. It's going to be large language models. It's going to be communicating with these things, and they're going to build all the code for you, and you're just going to ask very specific questions. And with Claude Code and Grok 4, it's going to get so good that even when it does make a coding mistake or an error, there will be enough signals sent back to the AI that it will know how to fix itself. It's crazy incredible what it's doing now. >> And something you were talking about last time: you'd tell it to make a plan first, especially in those cases where it's not handling something properly. You'd tell it to make a plan, and now they're sort of doing that themselves. These agents are like, "All right, let me plan this out." They'll create their plan, show it to you, and then go step by step. All of that is getting embedded into >> the thing, >> the experience. >> Yeah, >> it's blowing my mind. Absolutely blowing my mind. So cool. >> The other day I found another very interesting use case. I don't know if you've had this as well. When coding, for those of you who don't code, you have to focus, right? You need to be in front of your screen, thinking of many abstractions, figuring things out, making sure you use the right names for things. What variable name did I use up here? So if you're distracted **[00:20:00]** by, say, a random thing, a toddler perhaps, in the same room, there's not much you can do. Your code is going to be a mess, right?
And so the other day I had a very interesting experience. I was at my in-laws', and the girls were supposed to be with my in-laws, but of course they were in the office with me, playing around and screaming and doing things, right? And I had to do something. It wasn't crucial, nothing really important, but I had to do it, and I obviously wasn't going to get any real coding work done. But then I thought, well, I have to build this little module, so I'll do it with the LLMs, right? So I just started chatting with the thing while I was interacting with the kids, and it built exactly what I needed, and after a while it worked perfectly. And I didn't have to be looking at every single thing, because I was just telling it what to do, what to fix, how to adjust it. Then I ran the tests and everything. And I was like, oh, this is something interesting: if you're in a scenario where you have to be at a coffee shop, you have to be somewhere with too many distractions and no way out of it, you can still get stuff done with it. Which is very, very interesting. >> And it's also a matter of, you know, having moved yourself into a role of directing >> what's going on, right? You're directing the **[00:22:00]** vision. So again, back to our last conversation: >> we have this incredibly smart, very knowledgeable thing. Every programming language, every pattern, all kinds of functions; it knows how to write every little thing. No person can compete with that from a pure volume-of-knowledge standpoint. And it's getting even better as it goes.
And I was listening to Elon Musk talk about Grok 4, and he was saying that in the next year we're going to start seeing AI producing new scientific discoveries. I'm like, nah, I don't know. We'll see. I'll wait until that one really happens. But as I'm looking at code things... I already know, from my communication with Microsoft employees, that they're heavily using AI to generate things, and they're increasing the speed at which they produce products by using AI to help them write new code. So there's this whole layer of being able to direct. You actually kind of need to let the AI think for a bit, walk away, give it some time to write the code. But even in this situation it probably wrote what, hundreds of lines of code, tens of lines; it wrote those things, and those are physical things you just don't have to type out. Another book I read was Rob Collie's Power Pivot Pro book, about Power BI and Power Pivot, or something like that. It's a good book. >> He talked **[00:24:00]** a lot in this book... There was one story that really resonated with me, which is about the amount of communication that needs to occur between employees when working on a project. What happens now with AI is that I'm directly communicating with the engineer and directing what they're doing, and I don't need to transfer understanding and context to other people. It's still just me working on the project, but I've abstracted one layer of the technical debt out of my hands and put it into the hands of the AI.
So the cautionary tale here is: how do we know that what it wrote is good, and that there's not a bug introduced? I think that's still where the experts need to be in place. And maybe we talked about this, but think of the pyramid of skills, right? There was initially a very wide-based pyramid of junior developers doing the grunt work of a lot of things. >> Mhm. >> That pyramid is getting much narrower now. So if you're not the one using the AI to generate things, if you're not the one who understands how to direct the AI to build the stuff... I think the much more needed skill here is directing and explaining. And Armando, I'll be the first to admit, I don't know about you and your family, but I'm not a good describer of what I'm trying to think about. My family will now ask questions of me, and I physically see myself thinking: I don't think I prompted my kids the right way with those instructions. I don't think I prompted my wife the right way with what I was **[00:26:00]** asking about. So now I'm pausing in my real-life conversations to say, I need more context. Pause. I heard what you said; I don't understand. Can you give me some more context? It's about being specific in your details as you write to the large language models. And we said this last time: it's going to change how we think about things. It's going to change how we want to communicate. I'm used to typing a one-line bit of text into Google and then scanning through results until I find the answer. >> Those days, I think, are gone. I don't think we're there anymore. >> So anyway, I'll just pause. I've said a lot of things. What's your reaction? What are your thoughts? >> I think you're on point. The other day someone was saying: you're telling it to do something, and >> it doesn't work, and you type in, >> "It didn't work." >> Whose fault is that? >> Of course.
>> Is it the AI's fault, or is it actually your fault? Like, I'm the weak link in this now. It's my fault. >> Yes. And if you do type "it doesn't work," it'll think, and it'll go, and maybe it'll prompt you, etc. But at the end of the day, you're right: whose fault is it? Did you tell it properly? So why don't you ask a better question, or describe it better? Instead of saying "it doesn't work," say, >> "Well, I did this. The button did show up; it just has the wrong underline or the wrong border. Can you adjust it?" And **[00:28:00]** it'll fix it in a second or less. And it goes back to this: if a real person just did a bunch of work for you and you say, "It doesn't work" >> Do you think that'll help? Not really. But if you explain it the same way: hey, yes, >> the border was incorrect, but the button did show up. Then the person will say, "Oh, sure, let me fix it." It's the exact same thing. So, yeah. I think you're right on track. >> Yeah. And there's another point I want to make here as well. Again, I'm always on the Twitter and the LinkedIn and the YouTube feeds, and someone was saying they're now finding more value in using images with their prompts. Add this image with the prompt. And I'm thinking about this the same way I think about working with my employees on building things, building technology pieces. >> Mhm. >> If I can go to Figma and stub out a diagram or an image or a flow of what I'm thinking... dude, it's so much more helpful to my employees when I'm working with them. So now, again, try it out. We'll have to have this conversation again in another month, but try it out.
Go use images, build things in other tools, stub out what you think you want, and go back to the AI and say, "I want it to look like this. Here's **[00:30:00]** how these things function." Actually diagram out an image and pass that in. It's the same adage, right? A picture is worth a thousand words, something like that. >> Yeah, >> definitely. >> So I'm actually taking habits I've been forming with my team and bringing them to the large language model, saying these are good things we should be doing. We should be prompting more with images and diagrams, in addition to describing what you want things to do. And it will get closer to the answer. To your point about the button: >> Mhm. >> the highlighting's wrong, the shading's wrong, it doesn't highlight correctly, the colors are wrong. Unless you >> show a screenshot, >> how is it going to know? >> Yeah. Now, I will say two words: Figma MCP. >> Oh, I've heard... I showed my developer this. I know about this. Again, it's now large language models for your Figma files. Have you tried it? Have you played with it yet? >> I have not. I saw it, and >> admittedly I have not either. >> Not only is it a ridiculous idea, I think it works really well. >> Of course. Of course. >> So now I can describe, I can prompt... this is crazy. And the other thing I heard this last week: they said once you're done with a prompt, after it produces the result, you go back to the AI and say, "You are now a prompt engineer. Take that last **[00:32:00]** statement and rewrite it as a better prompt." Make the AI rewrite it in a way that it understands how to write the prompt. That way you get the output of a better prompt, and you can start learning from that as well, and change how you think about prompting the computer to begin with.
>> Yes. Yes. It's getting to a very weird place, but it's so good. Because the Figma thing is not only going to see the visual, like a screenshot, which is enough in most cases; with the Figma MCP it will understand the whole tree, where things are supposed to be placed, and the context of how everything fits together. >> We'll have to do another conversation just on MCP, because it's a totally new ball game. And I'm seeing this creeping into the Microsoft world as well. Again, I'm very Microsoft-centric, so I'm seeing MCPs starting to come out from Microsoft on top of their existing data products. If you're in Microsoft Fabric and you're using things there, they're now announcing, hey, these MCP things are showing up. And back to the whole idea of this: there are now Microsoft MVPs speaking about what the MCP is doing, how to build your own MCP, and how to use the MCP on top of other stuff. One of these gentlemen is named Kurt Beller, and he's really smart. He's actually a PhD, a really genius guy. I think this whole MCP world has just bitten him; he loves it. And great, because now I'm learning from him in this space. >> But **[00:34:00]** he's talking about how he can generate an MCP. But an MCP talking to an API layer, >> Mhm, >> that's table stakes. That's ground level one. It's when you start stitching multiple MCPs together, when you start using collections of API calls in concert with other things, that's when things really start lighting up. You're starting to do things in fractions of a moment that used to take a long time to build or design. >> Anyway, it's really interesting to see how rapidly this is evolving. And I don't think we're going to slow down on this, at least for a year or two now, until things start settling.
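For readers wondering what an MCP actually is under the hood: the Model Context Protocol is built on JSON-RPC 2.0, where a client discovers a server's tools with `tools/list` and invokes them with `tools/call`. A minimal sketch of building such a request with the standard library (the tool name and arguments are hypothetical, just to show the message shape):

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical request a client might send to a Figma-style MCP server:
request = mcp_tool_call(1, "get_node_tree", {"file_key": "abc123"})
```

Stitching MCPs together, as Mike describes, amounts to a client orchestrating `tools/call` requests like this across several such servers.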
At some point... >> It's the same as with computer processing power, I guess, right? Every couple of years it would double. >> Yeah. >> What's the law? There's a law there. >> Yeah, every couple of years it was roughly doubling in compute power, or something like that. But then it kind of plateaued, right? It hit its maximum. So then we started moving away from single-processor compute, because you can only make the wires on the processor so small. You can only add so many transistors; there's only so much small stuff you can build. At some point they said, "Okay, now we have to build more computers, multiple CPUs that communicate together." And that's kind of where we're at now. >> Yes. >> Crazy. >> Yes. And I'm very interested in talking about what we can do. I'm sure in our next conversation we'll talk about actually using a lot of these MCPs. And actually, I did see... I was looking for an MCP for a tool, I don't remember which tool right now, but one that I needed, and I looked at the GitHub repo and it was made by a Microsoft team. >> Mhm. >> So what you're saying, yeah, it's real. They're building for their own tools, but they're also building for other people's tools, because they know their clientele wants to start integrating with the LLMs. >> That's what it was: Moore's law is the CPU compute doubling law. It's not a standard, but roughly every couple of years the amount of compute power we had doubled. Architectures got better, things got more efficient, and at some point it started plateauing. I think we're not at that place for large language models. I think we're still on the growth part of this, still on the exponential side of things.
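The doubling curve being described is simple arithmetic. As a rough illustration, assuming a clean two-year doubling period, which real hardware only ever approximated:

```python
def doubling_growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# After 10 years at a 2-year doubling period: 2**5 = 32x the compute.
```

The same formula is why "huge leaps every month or two," as the hosts say of LLMs, feels so different: a shorter doubling period compounds dramatically faster.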
And so it feels like not every year, but every month or every two months, we're getting huge leaps and bounds in effectiveness on top of these things. It's a very fun time to be in technology. I also feel like I'm drowning to some degree, >> because it's just happening faster than I can keep track of. >> So fast. Like today, all the news about the new ChatGPT agent. >> Yes. >> Right. Now it'll go into the computer and do things for you, plan calendars. Like you were saying, it's slower, **[00:38:00]** it's taking its time, >> but it's doing things that used to be outside the chat, right? And that's going to be very interesting to talk about in a few days as well. We've got to wrap it up right now, because >> my girls are arriving. Literally, the car is driving in right now. But thanks so much for joining us today, Mike. We'll talk in the next one. And everyone who's live, thank you for listening as well. We'll talk soon. >> Appreciate you all. Thank you so much. See you next time for another quick chat around AI things. >> See you. Thanks. >> Bye.