AI's Wild Winter: OpenAI, Google, and DeepSeek Battle for the Future

Show notes

Exploring OpenAI's New O3 Mini: A Game-Changer in AI Development

In this episode of Human Core AI, we discuss the latest advancements in artificial intelligence, focusing on OpenAI's recently announced O3 Mini model. We delve into the significance of this release, its capabilities, and its potential applications, particularly in code generation and reasoning. We also talk about the fast-paced progression of AI technology, recount OpenAI's other recent releases like OpenAI Operator, and touch on developments from competitors like Google's Gemini 2 and China's Qwen 2.5.

00:00:00 Welcome to Human Core AI
00:00:24 OpenAI's O3 Mini Model: A Game Changer
00:03:18 Understanding O3 Mini's Capabilities
00:07:50 O3 Mini in Action: Use Cases and Features
00:19:11 OpenAI Operator: The First Agentic AI
00:28:32 Google's Gemini 2 and Image Generation
00:36:35 DeepSeek and the Competitive AI Landscape
00:40:00 The Future of AI and Human Agency
00:45:59 Closing Thoughts and Community Engagement

Show transcript

00:00:00: Hello and welcome, fellow humans. I am Mati, and this is Human Core AI, where we talk about

00:00:10: the latest developments in AI, analyze what is happening, and give

00:00:16: you updates and thoughts on the latest and greatest news coming out of the AI industry

00:00:29: and the reactions of the community. So let's dive into probably the biggest announcement

00:00:42: of the last few days, which has been OpenAI announcing the O3 mini model. Why has

00:00:53: it been so interesting? So first, to set up a bit of context:

00:00:59: we are talking about this model being released on, I think, the last day of January 2025,

00:01:09: and in the days and weeks before there were so many other announcements coming from OpenAI,

00:01:16: so they have really been ramping up their development speed and product releases.

00:01:23: But of course a new model being released is something everyone waits for and expects,

00:01:30: especially because until very recently a new model was rare news, and everyone

00:01:39: was waiting for the day the new model would drop. I see a real acceleration

00:01:49: these days in how fast OpenAI is able to deliver new models, and if we

00:01:57: put OpenAI together with all the other companies, most of them releasing lots

00:02:05: of new features, models, and applications of AI, it really brings to mind

00:02:15: that saying that there are decades where nothing happens and then there are weeks

00:02:23: where decades happen. So it feels this way, especially if you are

00:02:29: into AI, LLMs, and machine learning; the pace of advancement is really breathtaking,

00:02:39: and it looks like it's only going to accelerate from here, because everyone is betting

00:02:45: that maybe, just maybe, at some point the LLMs will build themselves and accelerate

00:02:56: the growth even further. I think this is already happening, and we will talk about some

00:03:03: of these things; some AI researchers have already been pointing at models learning from

00:03:12: synthetic data generated by other models. But let's circle back to the O3 mini release.

00:03:18: First of all, what is O3 mini? Think of O1, the previous model: a mini

00:03:29: version is supposed to be a somewhat less intelligent but very fast and very cheap

00:03:39: version of its big brother. OpenAI decided to call the first version of their

00:03:47: deep-thinking model O1, and O1 mini was the counterpart that was

00:03:54: faster and a bit less smart, because it doesn't get as much time and compute,

00:04:01: but cheap and very good for applications where you just want faster iteration.

00:04:07: At the same time, O1 turned out to be a model that really pushes the boundaries

00:04:21: of what is possible, with O1 Pro being a version that goes even further with the thought process.

00:04:28: So O3 comes into play. First of all: O3 and not O2. Why? Maybe

00:04:41: you've heard of the company called O2. I do wonder whether anyone at OpenAI

00:04:46: thought about the next generations of their models once they got past O1, which is

00:04:56: certainly something interesting. I personally observe that OpenAI is extremely

00:05:04: good at creating new AI models; they're very good at hiring top-notch experts in AI

00:05:14: and LLMs, data scientists, and people who are very deep into that. But then you notice,

00:05:22: as a product person, that the products they build are not always perfect, and the

00:05:32: naming of things, how you name a model in an LLM context, is extremely important. I would think

00:05:40: that maybe just by asking ChatGPT they would have come up with a better name than O1 in the

00:05:48: first place. And now we get O3, but it's not a jump two generations ahead where they're just skipping O2.

00:05:54: No, O3 was supposed to be O2; it will never be O2 because of trademarks and lawsuits.

00:06:03: So we have O3, and O3, on the, let's say, intelligence scale,

00:06:12: is going to push the boundaries much, much further, and probably it's going to become

00:06:17: a so-called frontier model, or probably the frontier model for a while. It's probably going

00:06:26: to push the envelope for at least a few weeks or months, given the pace of development.

00:06:32: But that's the goal right now: to create, first of all, the most intelligent, the best model

00:06:40: for deep thinking, though it comes at the expense of time and compute. And here enters O3 mini.

00:06:50: O3 mini was released before O3, probably for practical reasons: to build up a little bit

00:06:58: of hype preceding the big release of the big model. But O3 mini also probably was, and that's

00:07:08: my interpretation, easier to test and verify because of the faster iterations. With a

00:07:15: model like O3 mini that is faster than the big one, you can iterate much more and get

00:07:23: test results much more quickly. Of course, with O3 it's a different game; maybe we'll talk

00:07:30: about that later. But O3 mini is fast, whatever that means; it seems to be simply faster at

00:07:42: general reasoning than O1 mini used to be, and it's fast enough to allow something like a conversation.

00:07:50: And I think a very interesting use case for this model specifically is

00:07:57: using it as a code-generating model in IDEs that use LLMs to generate code. For example, you

00:08:05: have Cursor, you have Windsurf, those kinds of IDEs that have a so-called composer, which is kind of like a

00:08:14: senior coder that you ask to implement code changes. I think O3 mini fits that niche

00:08:24: particularly well because of its speed, its very good scores on programming

00:08:34: and reasoning tests, and also, which is something maybe not everyone knows about, because it's, I

00:08:42: think, the first model that OpenAI is introducing with native support for things like

00:08:48: function calling and for, I think it's called, structured outputs, or something like that.

00:08:54: It's a capability of the LLM that makes it easier to interact and interface with from traditional

00:09:03: code. So imagine the situation where you have an API that requires, of course, a strict

00:09:14: input format. You can take an LLM and ask it: hey, please could you give me this and that response

00:09:23: using this format, that format many times being JSON, for example. You then use the JSON

00:09:31: as input to the API; you can talk to the API, so you can start doing external stuff.

00:09:39: The most common example here would be: you want to know the

00:09:46: weather in your location, and you can just ask the LLM to craft the API request, send it

00:09:55: to the API, and get responses. What O3 mini can do is call those functions itself, as sketched below.
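For illustration, here is a minimal sketch of that flow using the OpenAI Python SDK's tool-calling interface. The `get_weather` function and its schema are hypothetical examples made up for this sketch; the model only chooses the function and produces JSON arguments for it.

```python
# Minimal sketch of function calling, assuming the OpenAI Python SDK's
# chat-completions `tools` interface. `get_weather` is a hypothetical
# function of ours; the model decides to call it and supplies JSON arguments.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location.",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}]

response = client.chat.completions.create(
    model="o3-mini",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)

# If the model chose to call our function, the arguments arrive as a JSON string.
tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)
print(tool_call.function.name, args)  # e.g. get_weather {'location': 'Berlin'}
```

You would then run the real weather lookup yourself and feed the result back to the model; the LLM never calls the API directly, it only crafts the request.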

00:10:06: You can also specify a specific format, and it will try very hard to stick with that format. I think OpenAI is actually

00:10:14: mentioning that you have certain guarantees as a programmer or user of this LLM: when you

00:10:20: ask it to keep a certain format, it will actually keep that format. That reduces, let's say, the space

00:10:29: for hallucinations, interpretation, or leaving out certain parts, which is something LLMs of course do. A sketch of that follows below, too.
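Here is a minimal sketch of that format guarantee, assuming the SDK's JSON-schema response format; the `news_summary` schema is a made-up example for illustration.

```python
# Minimal sketch of structured outputs, assuming the OpenAI Python SDK's
# JSON-schema response format. The `news_summary` schema is a made-up example.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3-mini",
    messages=[{"role": "user", "content": "Summarize today's biggest AI story."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "news_summary",
            "strict": True,  # the model must conform to the schema exactly
            "schema": {
                "type": "object",
                "properties": {
                    "headline": {"type": "string"},
                    "summary": {"type": "string"},
                },
                "required": ["headline", "summary"],
                "additionalProperties": False,
            },
        },
    },
)

# The content is valid JSON matching the schema, which is exactly the
# reduction of hallucination and omission described above.
print(response.choices[0].message.content)
```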

00:10:36: And I don't know if you are familiar, but let's digress a little bit:

00:10:42: LLMs can generate lots of information, but they always decide where to put the focus,

00:10:52: and if they consider that parts of what they're talking about are not as important

00:10:58: in the context, then they might omit stuff, or shorten it more

00:11:05: than other parts. So sometimes you get this interesting phenomenon of some parts of a document

00:11:12: being extremely accurate and some not. For example, my personal experience is that if I am

00:11:19: trying to get the best results out of an LLM, being specific, focusing on just one topic at a

00:11:28: time, and generating shorter texts instead of one big bible gets much better results. But

00:11:38: that's just a small digression on LLMs. So, the programming capabilities are very important

00:11:46: for O3 mini and will probably serve certain use cases very well. And then there is one of the highlights,

00:11:56: I guess, for myself and for most users who might not be programmers or might not use the LLM just for

00:12:04: programming: the search integration combined with reasoning. O3 mini is by default

00:12:13: a reasoning model, a fast one, but a reasoning model, and now you can enable web search, so

00:12:24: you can basically have it search and then reason over the search results and provide you

00:12:33: with reasoning that pulls in data from the internet. I've used the 4o model before,

00:12:45: which also has search capabilities, but I think O3 mini does not even need to compare itself to

00:12:54: 4o; it's so, so much better. And one particular use case I liked, which I discovered when playing around

00:13:01: with it, is that you can ask it to give you news coverage on a specific topic.

00:13:09: Something happened recently, and you just ask it: please tell me more about this. And in an instant

00:13:16: it can pull it in, because it's really fast: search results are instant, and then

00:13:22: reasoning happens in a few seconds, and then boom, you have it. The overall

00:13:30: result is really satisfying. So what I did is, I asked it to tell me more about

00:13:40: this thing that happened, and to give me the press or news coverage and the community reaction,

00:13:48: or social reactions, let's say. It pulls in from different sources at the same time, and then

00:13:54: in a few seconds you have something that would otherwise take you many minutes, if not an hour,

00:14:01: of going through different news outlets and seeing what they report on the topic. Of course,

00:14:09: when you ask for it, it gives you the links straight to the full articles, so

00:14:14: when you're interested in how a specific news outlet is covering the topic, you can just jump

00:14:22: into it. It's fascinating how well it works. And then the community aspect: it can apparently

00:14:28: find information from Twitter, now X, pretty well, and from other places. So I am extremely

00:14:38: happy with this feature specifically; it makes my life easier, even though it has only been out a few

00:14:45: days. And actually, this very episode was partially supported with research done by O3 mini,

00:14:53: so I was testing O3 mini while preparing this episode and using O3 mini for

00:14:59: that purpose, to expand, for example, on the research of some topics. Really cool. So,

00:15:08: yeah, the community reception has been overall positive, and one interesting thing happened

00:15:17: because of a post on X by Sam Altman of OpenAI fame. He tweeted, or X'd, on the topic

00:15:31: of the so-called Humanity's Last Exam, which is an amazing name. So what is Humanity's

00:15:39: Last Exam? It is a test that was specifically created to test, under extreme conditions, the performance and

00:15:48: intelligence of LLMs, or the knowledge of LLMs. This test consists of, I think, 3,000

00:15:57: questions, very difficult questions crafted by top PhDs of the world, and the goal is to test how

00:16:06: close we are getting to 100%. Until recently the very best models were at maybe

00:16:14: 9% or so; I think that actually, for a while, R1 from DeepSeek was at 9.4%,

00:16:24: only to be overtaken by O3 mini in its two variants, medium and high, where

00:16:30: medium is the regular amount of thinking and high puts a bit more thought into the

00:16:36: answer. And Sam Altman was talking about this test and how O3 mini

00:16:49: is right now the top-performing LLM, but it was only slightly better than the previous models.

00:16:56: We only understood the tweet after the deep research feature was released, which is

00:17:06: very fresh news that we will cover in the next episode. Or, if you want, check out

00:17:13: my YouTube channel; I have already posted a specific video on the topic of deep research by

00:17:22: OpenAI. With deep research we might be seeing what O3, the big brother of O3 mini, will

00:17:29: be, because we see, I think, a 26% positive result on this test as compared to the

00:17:42: previous winner, which was O3 mini with 13%, so it's basically almost doubling

00:17:50: the test result. We are very quickly getting to LLMs that can

00:17:58: compete with every single human in every single area of knowledge and every single competition

00:18:09: there is to be had in terms of intelligence and knowledge. And mind you, 26%

00:18:17: does not yet sound like an amazing feat, but trust me, at the velocity at which we are getting

00:18:26: better and better models, I'm pretty sure that two years from now we will be getting close to 70 or

00:18:34: 80%. And 70 or 80% on that test means that the LLM is capable of answering 70 or 80% of the toughest

00:18:46: questions that humanity can answer, which is mind-boggling, really.

00:18:58: Okay, let's jump to the next thing, because with OpenAI it's like Christmas again:

00:19:07: OpenAI has been releasing so many features recently. The other big one is the release of

00:19:16: OpenAI Operator. OpenAI Operator is what OpenAI called their first agent, or the first version of

00:19:29: an agentic AI that they have released. What does it mean to be an agent? I think, in rough

00:19:37: terms, the AI community agrees that being an agentic AI, or being an AI agent, means that

00:19:45: the AI, specifically the LLM, gets some agency, so it can do things other than just respond to

00:19:56: questions that we ask. For example, it can call an API, or do something

00:20:07: that has so-called side effects in the real world. Is pulling in data from the internet kind of

00:20:15: like being an agent? Maybe, although I think, because most LLMs can pull in data and respond based on

00:20:24: that data, and we could also add the data in the prompt, we don't think of web search as being

00:20:32: what defines an agent; it's just additional context. But agents are supposed to someday be

00:20:39: pretty autonomous and be able to run by themselves, given the goal specifications that they get

00:20:46: from, ideally, a human. And Operator is solving an interesting problem in an interesting way, at

00:20:56: least to me personally, because OpenAI has actually trained a different AI model, or LLM,

00:21:04: specifically to interact with browsers, and the big goal here, the big picture, is that

00:21:12: Operator can do the things that a human would do in a browser. But not really, because it's limited, of

00:21:19: course. And I think of all the features or products I've seen so far coming from

00:21:25: OpenAI and from other LLM companies, Operator seems to be the furthest from production use. That's

00:21:39: not to say... it feels like an alpha version, or maybe even more like a product vision, a

00:21:48: concept, something that you would normally publish just to showcase some theoretical

00:21:56: feature that you could build in the future. And it has been released to the public as,

00:22:03: I'd call it, a research preview, so I guess that kind of explains it. But of course when they announced

00:22:08: it, they announced it in big words, and that mention of being a research preview kind of gets lost in

00:22:16: all the hype, probably. But maybe it's mea culpa here: I should not get as hyped and should read more carefully

00:22:24: what OpenAI is promising when they release stuff. So specifically, if you haven't seen it yet,

00:22:33: Operator is

00:22:40: kind of like what you already know from ChatGPT, the way you interact with ChatGPT, but it also

00:22:48: has a virtual browser, and I guess that virtual browser is an actual virtual machine running

00:22:54: somewhere that the AI can interact with from the outside. And it also gets input from

00:23:01: what is seen on the screen. In the presentation video there was this mention of

00:23:09: the AI getting screenshots of what is on the screen and then, I think, giving the next command. And that

00:23:19: command, for example move the mouse here, move it there, is something

00:23:25: that the LLM is able to express with text, and then it gets translated into

00:23:31: the command to run through the browser.
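To make that loop concrete, here is a purely hypothetical sketch; OpenAI has not published Operator's internals, so `browser`, `model`, and every other name here are assumptions about what such an observe-act loop could look like.

```python
# Hypothetical sketch of the screenshot-to-command loop described above.
# OpenAI has not published Operator's internals; `browser` and `model`
# are assumed interfaces, not a real API.

def agent_loop(browser, model, goal: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        screenshot = browser.capture_screenshot()      # pixels of the current page
        action = model.next_action(goal, screenshot)   # text like "click (412, 303)"
        if action == "done":                           # model judges the goal reached
            return
        browser.execute(action)                        # translate text into a browser command
    # The failure mode from the episode: retrying the same step until it gives up.
    raise TimeoutError("agent gave up after max_steps")
```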

00:23:34: So far so good.

00:23:37: I think what's underwhelming is, first of all,

00:23:40: the examples of what you can do with it.

00:23:43: To me personally, having all this tech and this groundbreaking stuff,

00:23:50: and then the example they give you is that you can book a table.

00:23:54: Well, it seems kind of underwhelming.

00:23:57: And the other aspect is that booking a table

00:24:04: through a system that is extremely, extremely slow,

00:24:09: because it is actually interacting with the website,

00:24:13: means that it takes very, very long until it gets through all the screens, and

00:24:20: it tries, and tries again when it gets an error.

00:24:24: It sometimes gets stuck in a loop of trying the same thing,

00:24:28: kind of like when you use those LLM-driven IDEs, where they occasionally start

00:24:34: producing a bug and then you try to fix it and they say, yeah, I fixed it.

00:24:38: And then you run it, the same bug occurs and then you just loop 20 times.

00:24:42: That happens to operator as well.

00:24:44: Sometimes it just gets stuck in some place and at some point it just gives up.

00:24:48: Okay, it's early tech, so that could happen, it's a bit complex.

00:24:52: But then I just ask myself, is it conceptually sane?

00:25:03: Like is the idea of what it is doing actually the right way to go?

00:25:08: Because think of it, unless the agent really has lots of data and

00:25:18: permissions from you, will you trust it with your credit card, for example?

00:25:25: Will you give it that information so it can book a flight for you?

00:25:32: Right?

00:25:33: Of course, we can have some guardrails and say, okay,

00:25:38: if the flight is less than 200 bucks, just book it.

00:25:43: If not, don't do it.

00:25:44: But then maybe it books some flight with a very shady provider and

00:25:49: you don't even get a luggage allowance, even though it thought you did.

00:25:56: Airlines are very complicated; I'm just giving this example.
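As an aside, a guardrail like the one above is easy to express in code. This is an illustrative sketch only; the `Flight` type, the price threshold, and the luggage check are all made up for the example and are not part of any real booking agent.

```python
# Illustrative guardrail sketch for the "book it only if it's under 200 bucks"
# policy above. `Flight` and both checks are made-up examples.
from dataclasses import dataclass

@dataclass
class Flight:
    provider: str
    price_usd: float
    includes_luggage: bool

def may_autobook(flight: Flight, limit_usd: float = 200.0) -> bool:
    """Let the agent book only cheap flights that include a luggage allowance."""
    return flight.price_usd < limit_usd and flight.includes_luggage

# The shady-provider case: cheap, but fails the luggage check.
offer = Flight(provider="ShadyAir", price_usd=149.0, includes_luggage=False)
print(may_autobook(offer))  # False -> escalate to the human instead of booking
```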

00:25:59: But in general terms, the user experience with Operator right now,

00:26:06: to me personally, feels like I spend more time

00:26:14: waiting, or doing the steps that it cannot do

00:26:20: or is not supposed to do.

00:26:23: At some point, I'm thinking, why bother?

00:26:27: Why do it this way?

00:26:28: I don't want to be doing it this way.

00:26:31: And it's not really saving me any time; it's making things much more complicated.

00:26:38: It's slow, it doesn't feel right.

00:26:43: So I'm probably being a little bit too negative and

00:26:50: not giving it enough credit.

00:26:51: But then again,

00:26:56: I've seen other stuff coming from OpenAI that has blown my mind.

00:27:02: And then a release like this almost feels like, okay,

00:27:08: was it just trying to compete with some other development on the market,

00:27:13: just to not get left behind?

00:27:17: I don't know.

00:27:17: I just feel that this one should not have been released

00:27:24: because it's not doing OpenAI any good to release products that are not good.

00:27:28: But maybe I'm just not seeing the potential in it right now.

00:27:31: I just struggle.

00:27:32: I've not touched Operator since I tested it for a couple of days.

00:27:36: And I basically have zero, none,

00:27:42: not a need for using it.

00:27:45: I'm much, much more impressed by the other stuff that OpenAI is doing.

00:27:53: Well, let's see how it goes.

00:27:55: It's an early preview; maybe stuff changes.

00:27:57: Most likely it will improve.

00:28:02: I just expect that the concept of how it works gets refined and

00:28:09: maybe even revamped, because I don't think that in the current version of it,

00:28:14: even if the tech was solved and it was instant, it would still not work exactly

00:28:23: the way I would like it to, or the way I'd want an LLM to serve me.

00:28:27: All right, more announcements and more releases.

00:28:32: Google has been busy working on Gemini, and Gemini 2

00:28:39: has been released.

00:28:42: I am not as familiar with Gemini 2, to be honest, but

00:28:46: I just wanted to mention it because I tried it in the context of

00:28:52: the AI that is now available with Google Workspace.

00:28:56: That is Google's offering for businesses, where you get access to

00:29:02: email, Google Drive, and everything connected around teams.

00:29:07: And Gemini 2 seems to be a very solid model so far.

00:29:12: I have seen that it has quite increased reasoning abilities for sure.

00:29:20: It feels much more natural, and I don't know exactly, but

00:29:25: I think there's also kind of a reasoning model behind it.

00:29:28: So, one thing that I like a lot about Gemini, although

00:29:33: actually it's not Gemini proper but the image generation part of it,

00:29:40: is image generation with, I think it's called, Imagen 3.

00:29:44: It's extremely good, really. Like, if it wants to give you a result,

00:29:51: which it does not always want to do for some reason,

00:29:55: the results are really outstanding.

00:29:58: So you can basically create photorealistic images if you want.

00:30:03: And I've been extremely, extremely amazed with how well it works for

00:30:11: image generation, which is really nice, because I think one aspect of

00:30:18: this current LLM or AI revolution is that we kind of forgot that

00:30:28: image generation is not a solved problem yet.

00:30:31: We kind of assume it is.

00:30:33: It's not really.

00:30:34: I mean, Midjourney is there, it's good.

00:30:37: Although I have never understood their policy of just

00:30:44: being available on Discord.

00:30:46: It's just weird.

00:30:49: I think they could probably be 10x the company they are if they just offered

00:30:54: access to what they're doing through an API.

00:30:58: But maybe it's my ignorance of the decision-making process here.

00:31:02: Again, image generation to me is not yet a solved problem.

00:31:13: For example, take Grok on X.

00:31:16: It is nice because you can generate lots of pictures with it and iterate a bit.

00:31:22: But when you prompt it, you start feeling like, okay, those prompts, or

00:31:26: the LLM using the prompt, are not handled the same way a text LLM understands

00:31:36: text, which is usually perfect.

00:31:38: With text, you know that the LLM is understanding 100% of

00:31:44: what you are telling it. With those image generation prompts,

00:31:47: I feel like the prompts are not taken at face value.

00:31:50: They seem to be somehow translated, or sometimes parts of

00:31:56: them are completely ignored, which is what annoys me the most.

00:32:01: For example, you tell it there should be two trees in the picture and

00:32:05: then it just paints one, or five.

00:32:08: That happens a lot.

00:32:10: I don't know.

00:32:11: I think image creation or generation is not really a solved problem.

00:32:17: You still cannot, in my opinion, do things that you can do with Canva, for example.

00:32:26: So there's still some room for improvement.

00:32:30: But Imagen 3 can generate really good images;

00:32:37: especially humans are photorealistic.

00:32:40: And if you were not told that these are generated images,

00:32:46: you would have a hard time telling.

00:32:47: Maybe sometimes you have a very slight, faint idea of, okay,

00:32:53: this might be generated because you know that it's generated.

00:32:58: Otherwise, if you see it on a website and you just scroll through,

00:33:02: you would not even notice.

00:33:04: So it's a cool tool to explore and to use definitely.

00:33:09: And Gemini 2, I might still test it a little more so

00:33:13: I can give a better update on how it actually works.

00:33:21: And since we were talking about image generation,

00:33:26: I found a tool that got an update recently.

00:33:32: This tool is called Ideogram, at ideogram.ai.

00:33:43: It is a very solid image generation tool.

00:33:48: And it got an update recently that allows it to handle text much better.

00:33:54: At least much, much better than your average image generation tool that has been in use

00:34:01: in recent days, in recent months even.

00:34:05: So it seems to be able to respect letters and have the letters in one style

00:34:13: and not just randomly shuffle some letters to kind of make a text look like a word,

00:34:22: but not really, only sometimes getting it right and more often than not

00:34:27: just having kind of a dyslexic feel to it.

00:34:34: So apparently Ideogram can solve this problem pretty well.

00:34:42: I've gotten very good results with it.

00:34:44: So if you are interested in exploring an image generation tool that can render

00:34:50: very good text, or use very good text in images, then this is definitely a good tool to try.

00:34:57: One particular use case that stands out to me is creating cover art for podcasts.

00:35:05: Be it for the main show or for episodes, you can pretty much create good-looking cover art

00:35:18: that contains some text without being ashamed of having some dyslexic text generation there.

00:35:26: So yeah, it's pretty good.

00:35:28: I recommend checking it out. And to finish up with the release updates,

00:35:35: there's also Qwen 2.5, which has been released by another Chinese contender in the AI space.

00:35:43: I have been using it quite a lot, especially for image generation,

00:35:51: but most importantly it seems to be trying to compete with DeepSeek, and the results are pretty solid.

00:36:02: So I will definitely check out what is happening and keep watching the latest news,

00:36:12: but after the first release, when I saw the announcement, I went to the website and used it quite successfully,

00:36:23: and then after a couple of hours maybe it just broke down.

00:36:27: And breaking down seems to be a common issue for certain types of AI products.

00:36:35: DeepSeek, moving to one of the last topics now.

00:36:39: DeepSeek had a couple of days, or even three, of broken search, and sometimes it was even broken completely.

00:36:50: So it looks like there was too much happening and going on on the model.

00:36:58: Search broke first, and then the LLM itself would sometimes not respond.

00:37:08: And the issue is, it wasn't a thing of a couple of hours, which sometimes happens when a good product gets very hyped

00:37:18: and lots of people start using it,

00:37:22: but after a couple of hours, three maybe, capacity gets to a point where it is enough for the demand.

00:37:29: In this case, though, we saw for two or three days that search wasn't working.

00:37:36: And honestly, I stopped using DeepSeek because of that.

00:37:40: And once you get out of the habit, it's difficult to get back.

00:37:44: And in the meantime, O3 mini was released, which for me

00:37:48: completely replaced the use case I used to have for DeepSeek R1, which is reasoning and search combined.

00:37:56: Now I just use O3 mini.

00:37:58: I have no reason to go back to DeepSeek R1 right now.

00:38:04: Don't get me wrong. I'm very excited about it being an open source model.

00:38:09: And I hope that companies are really going to use the fact that it's open source to create new things that are not possible with closed source alternatives.

00:38:19: Closed source, like OpenAI; catch the irony here.

00:38:25: But DeepSeek has been broken for a while as a tool, and I think it might have lost a little bit of momentum, because they actually ranked very high in app stores.

00:38:42: And probably many people who are not very familiar with AI and LLMs used it as their AI assistant tool.

00:38:53: And because of that, the demand exceeded the available capacity, so it broke down.

00:39:02: And hopefully it will get back soon.

00:39:04: But as I mentioned, I have not used it for quite a while.

00:39:08: And this goes to show a little how these things work.

00:39:13: A couple of days of your service being down, and a competitor releasing just one new model or one new feature, can change the game completely.

00:39:24: And for me, it's easier because I spend lots of time using O1 Pro.

00:39:30: I now use O1 Pro and O3 mini basically as my two go-to models.

00:39:39: One is the "let's dive deep" model, and the other is the "let's dive into the topic but get answers quickly and search the web at the same time" model, which is just an incredible capability as well.

00:39:59: All right, I would like to wrap up with a little bit of industry discussion I saw on X; it was an interesting topic.

00:40:10: I would call it AI agency, or maybe human agency boosted by AI.

00:40:19: So there was someone who posted a tweet.

00:40:24: I don't know what you call it now that it's called X. Someone imagined what would happen if a person could have 10x the agency that they have right now.

00:40:39: And it was kind of playing with the fact that AI can be that boost for human agency.

00:40:48: And I think Sam Altman answered this tweet, or mentioned it, by saying: why not 100x? Which of course makes the hype in the AI community go up quite a lot.

00:41:04: But I think it's an interesting topic to discuss.

00:41:10: Is AI actually having that much of an impact on us humans? Are AI-powered solutions and tools really going to boost human agency by that much?

00:41:32: Because we oftentimes talk about the risks and losing jobs and everything.

00:41:36: But if it's true that by knowing how to use LLMs and AI, a human can get to 10x or maybe even 100x the agency,

00:41:46: we're talking about a whole new game, a whole new dimension of what it means to thrive in society,

00:42:01: and, in the end, what it means for us humans in terms of how capable we will become in the future.

00:42:11: And in the very near future, if we think of the speed of development, how fast it is and how it's accelerating, I feel we will soon have to ask ourselves that question.

00:42:27: What is going to happen to the jobs?

00:42:29: That's something everyone is asking, and no one has given me a convincing answer.

00:42:34: I guess no one has the answer, and the answer will only become apparent as we lose jobs and see what happens.

00:42:43: It's not a scenario that people can really predict right now.

00:42:47: But I think it's positive to see that on the other side there is the opportunity, the opportunity of reaching for the stars and saying: hey, I can be so much more productive.

00:43:02: I can have so much more impact.

00:43:04: I can build, you know, an empire around myself using the right tools.

00:43:10: Of course, this is still a future vision, because today we still struggle and take baby steps. But look at what is possible today that didn't even exist in this form two years ago, let alone ten years ago.

00:43:27: Right now we are seeing the development and the advent of a new kind of technology that hasn't been seen before.

00:43:39: And to me it's not a discussion about whether AI or LLMs are truly new technology.

00:43:47: No, they are not.

00:43:48: When I was studying computer science 20 years ago, robotics and AI were already being discussed and taught.

00:43:55: That's not the point.

00:43:58: The point is that the fact that this technology has, in some theoretical way, been around for so many years does not mean that it has been this vastly available, this widely available, and this cheap; it's still expensive, but it's getting cheaper by the day.

00:44:19: So this is the true revolution that is happening.

00:44:22: And yes, it is a point in time where the game is changing.

00:44:30: And I think the cards are being shuffled anew.

00:44:34: And we will see what happens once we get to the point where all those developments in AI really affect the world around us in a significant way.

00:44:48: I think right now most people are only talking about it.

00:44:51: Some people don't even know yet.

00:44:54: But soon we're going to see it.

00:44:57: But once the disk starts spinning, it's going to be difficult to stop it for a while to just have a look.

00:45:06: So it's better to start exploring now what's possible, especially if you are a tech enthusiast, maybe a tech founder even.

00:45:16: Or at least someone with an interest in building things, creating new solutions, and exploring what is possible with technology.

00:45:26: Then it's definitely the right time to get interested in AI and to start exploring,

00:45:31: as I am doing myself, feeling excitement for the first time since, I think, the internet first appeared.

00:45:42: And we are talking about a very, very radical change in the next few years.

00:45:49: So I'm here to watch it with my own eyes.

00:45:53: I'm here to be part of the discussion.

00:45:56: I'm here to debate and comment on this.

00:45:59: I would like to give updates on the recent happenings, but also comment on how I feel about it.

00:46:06: So if you like this kind of content, please leave a like and share it with friends or people who might be interested in this kind of stuff.

00:46:18: And if you want to talk to me, feel free to follow me on my X profile, Human Core AI.

00:46:30: And yeah, let's stay in touch, keep the discussions and this community alive, and see how AI is going to affect our lives in the coming years.

00:46:46: So thanks a lot and talk to you soon.
