- How AI can level the playing field between top performers and less experienced staff
- The potential for massive cost savings and efficiency gains across various industries
- The ethical implications of AI in the workplace: threat or opportunity?
- Real-world implementation strategies and challenges

Whether you're a CEO looking to gain a competitive edge, an HR director aiming to optimize your workforce, or simply curious about the future of work, this episode is a must-listen. We'll separate hype from reality and give you actionable insights on how AI might transform your professional life.

Tune in for a fascinating glimpse into a future where humans and AI work side by side.

The workplace revolution is here. Are you ready?

About the host

Sergio Voropaev
Founder of Great Leveler AI, a platform helping tech leaders boost productivity by 43% through AI implementation.
Former Swiss VC mentor, successful founder of multiple tech startups, and expert in AI business integration and scaling.

Let's connect: ceo@greatleveler.com
X: https://x.com/greatlevelercom | Telegram: https://t.me/greatlevelercom
Categories: Education (Self-improvement), Business (Management), Technology

Episode 11 (Season 1)
The Next 18 Months: Anthropic's Case for Urgent AI Regulation

Join us on The Next 18 Months, where we dive deep into Anthropic's compelling vision for the future of artificial intelligence and the crucial role that regulation will play in shaping it. With AI's capabilities advancing at breakneck speed, Anthropic's latest report warns that we have roughly 18 months to establish guidelines that protect society without stifling innovation. In this episode, we explore why experts say the next 18 months could make or break AI's future and discuss the steps Anthropic believes are necessary to responsibly harness this transformative technology. Tune in as we dissect the risks, the revolutionary potential, and the pressing need for policies that balance safety with progress.

Transcript

Speaker 02 00:00:00

Hey, everyone, and welcome back to the Deep Dive. Today, we are diving into something really important: the future of AI and why everybody is talking about the need for regulation all of a sudden. So we've got some excerpts here from a document that lays out some pretty serious concerns about how fast AI is developing and what we need to do about it.

Speaker 00 00:00:23

Yeah. You know, it's interesting, because even the experts in the field are saying we need to act now. They specifically say the next 18 months are really make or break for getting policies in place.

Speaker 02 00:00:35

Whoa, that's not a lot of time to get everything in order. But what's got everybody so spooked? Is it killer robots or something?

Speaker 00 00:00:41

Not quite killer robots, but the document does paint a picture of what could happen if we don't put some safeguards in place. And it's not just about stopping bad things from happening. It's also about making sure we can still benefit from all the good that AI can do.

Speaker 02 00:00:53

So it's like a balancing act. We need to control the risks, but not squash all the potential that AI has.

Speaker 00 00:00:59

Exactly. The document argues that getting well-designed regulations in place now could be the key to unlocking AI's potential while keeping those risks in check. But if we wait too long, we might end up with regulations that are rushed and ineffective, and they could even end up stifling innovation.

Speaker 02 00:01:18

So it's strike while the iron's hot: get those regulations in place before things get out of hand. But what kind of risks are we talking about, specifically?

Speaker 00 00:01:26
The document doesn't just speak in vague terms. It actually gets into specifics, which I think is what makes it so compelling. For example, it highlights how AI is getting incredibly good at coding. You might be surprised by just how fast AI is getting better at things like writing code. The document gives an example of something called SWE-bench, which is basically a test to see how well AI can solve real-world coding problems.

Speaker 02 00:01:52

So it's like they're giving AI a coding exam.

Speaker 00 00:01:55

Exactly. And the results are pretty amazing. In a little over a year, AI's performance on this benchmark has jumped from solving a tiny 1.96% of the problems to a whopping 49%.

Speaker 02 00:02:07

Wait, almost 50%? That's insane. They're really that good at coding already?

Speaker 00 00:02:10

It's pretty remarkable how quickly it's improving. And the document even names specific AI models and how they've evolved, like Claude 2, Devin, and Claude 3.5. It's really like watching them level up in real time. And that rapid development is exactly why the document is stressing we need to act now.

Speaker 02 00:02:31

Okay, I'm starting to see why everyone's so concerned. So coding is one thing. What else is AI getting good at that's causing all this concern?

Speaker 00 00:02:37

Well, cybersecurity is a big one. The document cites internal findings showing that AI models can already assist in cyber offense tasks.

Speaker 02 00:02:44

Whoa. So they're not just writing code. They're potentially using it for hacking and that sort of thing. That's a little unsettling.

Speaker 00 00:02:49

Yeah, it is unsettling. What's particularly concerning is that they predict that future AI models, the ones with more advanced planning abilities, could be even more effective in this area. And it's not just about stealing data or disrupting systems. It's about the possibility of large-scale attacks that could cripple critical infrastructure or even cause physical harm.

Speaker 02 00:03:10

Okay, now I'm officially freaked out. But it sounds like this document isn't just trying to fearmonger. They're proposing solutions too, right? I keep seeing this responsible scaling policy thing pop up. What's the deal with that?

Speaker 00 00:03:23

Essentially, a responsible scaling policy, or RSP, is a proactive way to manage AI risks. The document talks about how Anthropic, which is an AI company, has implemented its own RSP and what it's learned from that experience.

Speaker 02 00:03:39

So it's like a set of rules for developing AI responsibly, sort of like a code of ethics.

Speaker 00 00:03:44

Yeah, that's a good way to put it. It's about building safety into the development process from the very beginning, rather than trying to tack it on as an afterthought. The document outlines two core principles of Anthropic's RSP: proportionality and iterative development.

Speaker 02 00:03:57

Okay, break those down for me. Proportionality and what was the other one?

Speaker 00 00:04:01

Proportionality means that the safety and security measures should scale up as AI capabilities reach certain thresholds. Think of it like levels in a video game. As the AI gets more powerful, you unlock new safety protocols to match the increased risk.

Speaker 02 00:04:14

So it's not a one-size-fits-all approach.
You adjust the safety measures based on how advanced the AI is.

Speaker 00 00:04:20

Exactly. And the document highlights that different levels of AI capability are going to require different levels of oversight and control. And then there's the iterative part, which means the policy is constantly being reevaluated and adjusted as AI technology keeps evolving. It's not a static set of rules. It's a dynamic system that adapts to new challenges and discoveries.

Speaker 02 00:04:42

That makes a lot of sense. It's like you're constantly learning and improving, just like the AI itself.

Speaker 00 00:04:47

Yeah, and that's a key point. The document emphasizes that we can't just set some rules and then forget about it. We need to be constantly monitoring and adjusting our approach to AI safety as the technology continues to change.

Speaker 02 00:05:00

Okay, so these RSPs seem like a good place to start, but are they enough on their own?

Speaker 00 00:05:05

That's the million-dollar question, isn't it? The document acknowledges that while RSPs are a valuable tool, they're not a replacement for actual regulation.

Speaker 02 00:05:13

So they're like a stepping stone to something more comprehensive.

Speaker 00 00:05:16

Exactly. The document argues that RSPs can provide a solid foundation, and even a blueprint, for what effective AI regulation could look like.

Speaker 02 00:05:26

All right, so let's talk regulation then. We're talking government regulations, laws, that sort of thing. What would that actually look like, according to this document?

Speaker 00 00:05:35

Well, the document lays out three key elements of what it considers effective AI regulation: transparency, incentivizing safety and security, and simplicity and focus.

Speaker 02 00:05:47

Okay, let's break those down. We talked about transparency, meaning that companies would need to be upfront about their safety measures and the risks that are involved. But how do we actually incentivize them to prioritize safety? It can't all just be about good intentions, can it?

Speaker 00 00:06:01

No, definitely not. Good intentions aren't enough. The document suggests a couple of interesting mechanisms. One idea is for regulators to clearly define the specific threats that those companies' responsible scaling policies need to address.

Speaker 02 00:06:15

So it's like giving them a checklist of potential problems and saying, okay, show us how you're going to deal with these.

Speaker 00 00:06:20

Exactly. It sets a clear baseline for safety without being overly prescriptive about how those companies should achieve it. It encourages them to come up with creative solutions within a framework of safety.

Speaker 02 00:06:31

I like that. It still allows for innovation, but with some guardrails in place. What about this second element then? Incentivizing safety. Are we talking about gold stars for good behavior?

Speaker 00 00:06:41

Well, maybe not gold stars, but the document suggests some concrete incentives. One idea is a tiered system where companies with more robust safety policies get certain benefits or recognition.

Speaker 02 00:06:52

Ah, so like a gold standard for AI safety that companies can try to achieve. I like that. It creates a clear target and encourages that race to the top that the document mentioned.

Speaker 00 00:07:02

Exactly.
And the goal here is to create a system that rewards responsible behavior and incentivizes companies to go above and beyond the bare minimum. We want them to see safety as a competitive advantage, not just a box to check.

Speaker 02 00:07:16

That makes a lot of sense. What about the third element then? Simplicity and focus. Why is that so important when it comes to AI regulation?

Speaker 00 00:07:23

Well, imagine a world where the regulations are so complicated and so convoluted that even the experts can't understand them.

Speaker 02 00:07:30

Oh, yeah, that sounds like a recipe for disaster. It would be a bureaucratic nightmare, it probably wouldn't be very effective, and it could even stifle innovation if everyone is too busy trying to decipher the rules.

Speaker 00 00:07:41

Exactly. And the document really stresses this point. AI regulations need to be clear, concise, and easy to understand. They need to focus on the most critical risks without getting bogged down in unnecessary bureaucracy.

Speaker 02 00:07:56

It's like that saying: keep it simple, stupid. Sometimes the most straightforward solutions are the best.

Speaker 00 00:08:01

Absolutely. And in this case, simplicity also means more transparency and accountability. When the regulations are easy to understand, it's easier for the public to hold companies and policymakers accountable for actually following them.

Speaker 02 00:08:13

So it sounds like this document is advocating for a very balanced approach to regulation then, one that prioritizes safety without stifling innovation.

Speaker 00 00:08:22

Yeah. And they emphasize that there isn't necessarily one right way to achieve that balance. The key is going to be finding a proposal that a wide range of stakeholders can all agree on. It's going to require collaboration and compromise from everyone.

Speaker 02 00:08:37

Okay, so back to this ticking clock then. This document keeps hammering home the urgency of the situation. They're saying the next 18 months are critical. So what needs to happen in that time frame to make sure we get this right?

Speaker 00 00:08:49

Well, the document is calling for a really coordinated effort between policymakers, the AI industry, safety advocates, civil society, and lawmakers. They all need to come together to develop a regulatory framework that addresses the real risks of AI without hindering its potential benefits.

Speaker 02 00:09:06

That sounds like a tall order, getting all those different groups to agree on something as complex as AI regulation. That's like herding cats.

Speaker 00 00:09:13

Yeah, it's definitely a challenge. But the document argues that it's a challenge we absolutely must overcome. The stakes are too high to ignore. And it's not just a national issue either. The document also stresses the importance of international cooperation and information sharing.

Speaker 02 00:09:28

Right, because AI doesn't respect borders. If one country develops a dangerous AI system, it could potentially affect the entire world.

Speaker 00 00:09:36

Exactly. So we need to be working together to find global solutions to this global challenge. And while the document focuses mainly on the US context, they are encouraging similar regulatory efforts in other countries that are starting to grapple with these same issues.

Speaker 02 00:09:52

So it's clear this isn't just some abstract futuristic problem.
The risks posed by AI are very real, and they're happening right now.

Speaker 00 00:09:59

The document makes that very clear. They give real-world examples of how AI is already being used in ways that could have some pretty serious negative consequences, particularly when it comes to things like cybersecurity and those CBRN (chemical, biological, radiological, and nuclear) scenarios. Remember how we talked about AI getting really good at coding? Well, that skill could very easily be exploited for malicious purposes.

Speaker 02 00:10:18

Right. Like using AI to create incredibly sophisticated malware or to launch large-scale cyberattacks. And AI's growing knowledge of biology and chemistry is equally concerning, especially when you consider the potential there for the development of bioweapons or other really dangerous substances.

Speaker 00 00:10:37

Yeah, it paints a pretty sobering picture. It's like we're standing at a crossroads. We have this incredibly powerful technology that has the potential to do some really amazing things, but it also has the potential to cause immense harm if it's not developed and used responsibly.

Speaker 02 00:10:51

So it's a bit of a tightrope walk then, trying to harness the power of AI but also keep it under control. So how do we translate this urgency into concrete action? What can we as individuals do to make a difference in this conversation?

Speaker 00 00:11:04

That's a great question, and it's one that the document indirectly addresses, I think. First and foremost, we need to stay informed: read articles, listen to podcasts like this one, and really educate ourselves about the potential benefits and risks of AI.

Speaker 02 00:11:16

Knowledge is power, right? The more we understand about AI, the better equipped we'll be to advocate for its responsible development and use.

Speaker 00 00:11:25

Absolutely. And speaking of advocating, don't be afraid to make your voice heard. Contact your elected officials, participate in public forums, and really engage in discussions about AI ethics and regulation.

Speaker 02 00:11:40

So it's not just about passively consuming information. It's about actively shaping the conversation.

Speaker 00 00:11:45

Exactly. We all have a stake in the future of AI, and we need to make sure our voices are heard. The future of AI is not predetermined. It's something that we are all shaping together.

Speaker 02 00:11:55

Well said. This has been a really thought-provoking discussion, and you've done a fantastic job breaking down these really complex issues in a way that's easy to understand.

Speaker 00 00:12:03

It's been my pleasure. I think it's so crucial to have these conversations and to try to demystify the world of AI a little bit. It's not just a topic for tech experts. It's something that affects all of us.

Speaker 02 00:12:14

And on that note, let's shift gears a little bit and explore some of those potential solutions in more detail. Okay, so let's get down to the nuts and bolts. What would a world with responsible AI regulation actually look like? I mean, it's all well and good to talk about this transparency and these incentives, but how do those ideas translate into real-world policies?

Speaker 00 00:12:42

Well, that's the big question, isn't it? And the document does offer some concrete suggestions.
For one, when it comes to transparency, it's not enough for companies to just say, oh yeah, we're being responsible. There has to be a way to actually verify that they're following through on those promises.

Speaker 02 00:12:56

So it's not just taking their word for it. There needs to be some kind of oversight.

Speaker 00 00:13:00

Exactly. The document talks about the need for things like independent audits or third-party reviews to make sure companies are actually putting those responsible scaling policies into practice. It could even involve some level of government oversight, similar to how the FDA regulates food and drugs to make sure they're safe.

Speaker 02 00:13:17

Okay, so it's kind of like having a referee to make sure everyone is playing by the rules. That makes sense.

Speaker 00 00:13:23

Yeah, and that kind of transparency not only helps to reduce the risks, but it also builds trust. When the public can see that companies are being held accountable, they're more likely to embrace AI and see its potential benefits.

Speaker 02 00:13:37

Transparency builds trust. I like that. But what about the incentives? How do we actually motivate companies to prioritize safety beyond just avoiding penalties?

Speaker 00 00:13:47

Well, as we mentioned before, there's this idea of creating a tiered system where companies with more robust safety policies get certain perks.

Speaker 02 00:13:55

Right, like that gold standard for AI safety. But what would those perks actually look like? Are we talking tax breaks, government contracts, bragging rights?

Speaker 00 00:14:05

All of those are possibilities. The document suggests that governments could offer financial incentives, like tax breaks or grants, to companies that show a real commitment to safety.

Speaker 02 00:14:14

So basically rewarding them for doing the right thing. That makes sense. What other kinds of incentives are mentioned?

Speaker 00 00:14:19

Another idea is to create some kind of public recognition system, like a certification program where companies that meet certain safety standards can display a seal of approval or something like that.

Speaker 02 00:14:34

So it's kind of like a Good Housekeeping seal of approval, but for AI. I like that. It's a way for companies to signal, hey, we're committed to safety, and maybe even attract more customers or investors.

Speaker 00 00:14:44

Exactly. It's all about making sure that doing the right thing isn't just ethically sound, but actually makes good business sense.

Speaker 02 00:14:51

It's like "safety sells." But incentives aside, what about the actual details of the regulations themselves? How do we make sure those are clear, concise, and easy to follow, like the document is recommending?

Speaker 00 00:15:01

Well, that's where this principle of simplicity and focus comes in. The goal is to have regulations that are very straightforward, easy for everyone to understand, from the AI developers to the policymakers to the average person on the street.

Speaker 02 00:15:13

Because if the rules are too complicated, no one is going to follow them. It's just going to create confusion and loopholes for companies to take advantage of.

Speaker 00 00:15:22

Exactly. The document emphasizes that AI regulations should be laser-focused on the most critical risks.
We don't want to get bogged down in unnecessary bureaucracy or technical jargon.

Speaker 02 00:15:35

So keep it simple, keep it focused, and keep it effective. That sounds like a good mantra for any kind of regulation, not just AI.

Speaker 00 00:15:43

Right. And just as importantly, they need to be adaptable. The world of AI is constantly changing, so we have to make sure the regulations can actually keep pace with those changes.

Speaker 02 00:15:51

So we're not just setting a bunch of rules in stone and calling it a day. We need to be constantly evaluating and adjusting as new challenges come up.

Speaker 00 00:15:58

Yeah, exactly. This isn't a one-and-done kind of situation. It's a process that requires constant dialogue and collaboration between everyone involved.

Speaker 02 00:16:06

Policymakers, industry experts, researchers, and the public. It sounds like this document is advocating for a very dynamic approach to AI regulation then, one that's constantly evolving and adapting to this ever-changing landscape.

Speaker 00 00:16:21

I think that's a great way to put it. It's not about trying to predict every possible risk or every possible scenario. It's about creating a flexible framework that can respond to new challenges as they emerge.

Speaker 01 00:16:32

So it's less about specific rules and more about setting these core principles and guidelines.

Speaker 00 00:16:37

Exactly. It's about setting a clear direction for how AI should be developed and used while still allowing for that flexibility and innovation. And remember that ticking clock we keep talking about? It's a reminder that we don't really have time to waste. We need to start having these conversations now and working together to find solutions that balance progress with the need for safety.

Speaker 02 00:17:00

Well, I think this deep dive has given our listeners a lot to think about. We've explored the potential of AI, but we've also talked about some very real risks that come along with it. And we've discussed some of the key steps we can take to try to make sure that AI is developed and used responsibly.

Speaker 00 00:17:13

It's a complex issue, and there are no easy answers. But by engaging in these conversations, staying informed, and making our voices heard, we can all help to shape the future of AI in a way that benefits all of humanity.

Speaker 02 00:17:29

That's a great point to end on. Thanks for joining us today for this deep dive into the world of AI and its implications for our future. We hope you found this conversation both informative and thought-provoking.

Speaker 00 00:17:40

It's been a pleasure to talk about this important topic with you. And we encourage all of our listeners to keep learning and engaging with this conversation about AI, because ultimately the future is in our hands, and it's up to all of us to make sure that AI is a force for good in the world.

Speaker 02 00:17:54

And for those of you who want to delve even deeper into this topic, we've included links to all the sources we've mentioned down in the show notes, so be sure to check those out.
Until next time, stay curious, stay engaged, and stay informed.

More episodes

Episode 10. Building the Future of Gaming: AI Next-Frame Prediction
Join us as we explore the mind-bending world of AI-powered gaming, where next-frame prediction technology is revolutionizing how we interact with virtual worlds. We dive deep into groundbreaking projects from Descartes and Etched, including an AI version of Minecraft that responds to players' imagination in real time. Our expert guest breaks down the technology behind these innovations, from the specialized Sohu chip to the broader implications for education, healthcare, and creative expression. Discover how AI isn't just changing how we play games - it's reshaping how we interact with technology itself.

Episode 9. The Search Wars: ChatGPT's New Web Powers vs Google & Perplexity
In today's episode, we're diving into the evolving world of search engines and how groundbreaking upgrades to ChatGPT's search capabilities could be changing the game. Imagine asking a question and getting a direct, sourced answer instead of endless scrolling. We'll explore the magic behind ChatGPT's new real-time web access, how it stacks up against Google Search and Perplexity, and why this tech revolution might reshape how we explore, learn, and connect with information. From travel tips to stock updates, join us as we break down this "search revolution" - and debate who might come out on top!

Episode 8. AI Mediator: How Google DeepMind's Habermas Could Transform Conflict Resolution
Imagine a world where AI doesn't just mediate disagreements but actively helps prevent conflicts from escalating, both in person and online. In this episode, we explore Google DeepMind's latest breakthrough, Habermas (the Habermas Machine dataset): a powerful AI designed to resolve disputes by finding genuine common ground among diverse viewpoints.
Joined by an expert guest, we'll dive into how this technology works, the promising research behind it, and the vast implications for promoting peace and understanding.

Episode 7. The Centaur Conundrum: A Cognitive AI Model
Dive into the fascinating world of Centaur (https://huggingface.co/marcelbinz/Llama-3.1-Centaur-70B), an ambitious AI model. Explore how this groundbreaking technology is blurring the lines between artificial intelligence and cognitive science, and uncover the incredible potential it holds for unlocking the secrets of human behavior and cognition.

Episode 6. Digital Minds at Work: The Revolution of Large Action Models
In this episode, we dive deep into the groundbreaking world of Large Action Models (LAMs), with a special focus on Anthropic's Claude 3.5 Haiku. We'll explore how this lightning-fast AI isn't just chatting anymore - it's actively using computers like a human would, opening files, navigating websites, and handling complex digital tasks through innovative pixel-based interaction.

Episode 5. AI Dream Teams: How Multi-Agent Platforms Are Revolutionizing Business
Dive into the world of Asilisc Scope and multi-agent AI platforms that are transforming how businesses operate. Discover how interconnected AI specialists can streamline your company's workflow, from accounting to customer service and beyond. Learn how these AI teams collaborate under human supervision to tackle complex problems, potentially boosting efficiency and innovation across your entire organization.
Join us as we explore the future of AI in business, where your next star employee might just be a team of artificial intelligences.

Episode 4. The Action AI Revolution: How Large Action Models Are
Explore the game-changing world of Large Action Models: AI that doesn't just advise, but acts. Learn how this cutting-edge technology is dramatically accelerating productivity by automating tasks across various business software platforms. We'll dive into the potential benefits, challenges, and ethical considerations of AI that works alongside humans, potentially reshaping the future of work as we know it.

Episode 3. Colossus Chronicles: Musk, Grok-3, and the Future of AI
This episode explores the creation of Colossus, a supercomputer of unprecedented power, built to train the next-generation AI model, Grok 3. We'll delve into the astonishing specs of this machine. Buckle up for a journey into the cutting edge of AI, where the future is being written at warp speed in a repurposed factory in Memphis, Tennessee.

Episode 2. Watts Up With AI: Powering the Digital Brain
"Watts Up With AI" dives deep into the rarely discussed but critically important topic of AI's massive energy consumption.
As we marvel at AI's capabilities in generating images and engaging in conversations, this episode uncovers the hidden giant powering it all: the enormous energy appetite of AI systems. It delves into the challenges of sustainably powering the AI revolution, discussing innovative solutions like Google's exploration of small nuclear reactors for data centers.