How AI can level the playing field between top performers and less experienced staff

The potential for massive cost savings and efficiency gains across various industries

The ethical implications of AI in the workplace - threat or opportunity?

Real-world implementation strategies and challenges

Whether you're a CEO looking to gain a competitive edge, an HR director aiming to optimize your workforce, or simply curious about the future of work, this episode is a must-listen. We'll separate hype from reality and give you actionable insights on how AI might transform your professional life.

Tune in for a fascinating glimpse into a future where humans and AI work side by side.

The workplace revolution is here - are you ready?
Host: Sergio Voropaev (let's connect: ceo@greatleveler.com). Founder of Great Leveler AI - a platform helping tech leaders boost productivity by 43% through AI implementation. Former Swiss VC mentor, successful founder of multiple tech startups, and expert in AI business integration and scaling.

Episode 16 - Chain of Thought: How AI Gets Smarter Through Error-Aware Training

Dive into groundbreaking research that's revolutionizing how AI learns to think. Discover why teaching language models to recognize and learn from their mistakes leads to more robust and reliable performance. From GPT-4 to Gemini Pro, see how error-aware training is pushing the boundaries of artificial intelligence and challenging our traditional approaches to learning.

Episode Highlights:

Chain of Thought (CoT) Prompting: Stepwise vs. Coherent Approaches

The Power of Error-Aware Demonstrations in AI Learning

Why Mistakes in the Middle Matter More Than Final Answers

How AI Models Learn from Their Own Errors

Revolutionary Results: 5%+ Accuracy Improvements Across Major LLMs

Transcript:

Speaker 0 00:00:00

It's really incredible how much progress we're seeing with these large language models. They're not just, you know, chatting anymore. They're able to solve these complex problems. And one of the things that's really pushing the boundaries, I think, is this idea of chain of thought prompting - CoT, for short. Could you give us a quick overview of what that is?

Speaker 1 00:00:22

Yeah. So essentially, with CoT prompting we're giving the model a few examples of how to reason step by step, and then it learns to tackle new problems in the same way.

Speaker 0 00:00:33

So it's like we're giving it a roadmap.

Speaker 1 00:00:35

Exactly. And what's really interesting is that models can pick up on this pattern and apply it to problems they haven't seen before.

Speaker 0 00:00:41

That's so cool. So today we're diving into a new research paper that takes us inside this CoT process and explores how these powerful LLMs learn.

Speaker 1 00:00:52

Yeah. This paper focuses on two specific ways that transformers - the architecture behind LLMs - can learn chain of thought prompting: stepwise and coherent.

Speaker 0 00:01:02

Okay, so break this down for me. What's the difference between those two?

Speaker 1 00:01:05

Imagine stepwise CoT like a relay race: each runner - each step in the reasoning process - is isolated, only using the information passed from the runner before it. Coherent CoT is more like a team huddle: each member has access to all the information that's been gathered so far, and each step integrates all of the previous reasoning.
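To make the stepwise-versus-coherent distinction concrete, here is a minimal Python sketch. It is not the paper's code: `call_llm` is a stand-in for whatever model API you actually use, and the prompts are purely illustrative.

```python
# Illustrative sketch of stepwise vs. coherent chain-of-thought generation.
# `call_llm` is a placeholder for a real model call (OpenAI, Gemini, etc.);
# here it just echoes the request so the script runs without an API key.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a dummy reasoning step."""
    return f"[model reasoning for: {prompt[-60:]!r}]"

def stepwise_cot(question: str, n_steps: int = 3) -> list[str]:
    """Relay-race style: each step sees only the step handed to it."""
    steps = []
    previous = question
    for _ in range(n_steps):
        step = call_llm(f"Given only this, produce the next step:\n{previous}")
        steps.append(step)
        previous = step          # earlier context is dropped
    return steps

def coherent_cot(question: str, n_steps: int = 3) -> list[str]:
    """Team-huddle style: each step conditions on the whole history,
    so the model can notice and correct an earlier mistake."""
    steps = []
    for _ in range(n_steps):
        history = "\n".join([question] + steps)
        step = call_llm("Review all reasoning so far, fix any errors, "
                        f"then produce the next step:\n{history}")
        steps.append(step)       # the full chain stays in context
    return steps

if __name__ == "__main__":
    q = "What date is 10 days before Christmas 2024?"
    print(stepwise_cot(q))
    print(coherent_cot(q))
```

The only difference between the two functions is what each new step is conditioned on, which is exactly the relay-race versus team-huddle contrast described above.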
Speaker 0 00:01:28

So it's constantly checking in with the bigger picture.

Speaker 1 00:01:31

Exactly.

Speaker 0 00:01:32

Okay, that makes sense. So does this actually make a difference in how well the model performs?

Speaker 1 00:01:36

That's what the researchers wanted to find out. And their findings show that coherent CoT actually leads to more accurate results. The reason for this is pretty interesting: when the model takes all the previous steps into account, it can spot potential errors that were made earlier in the chain and self-correct.

Speaker 0 00:01:53

Wait, so it can backtrack and fix its own mistakes? I wouldn't have thought that would make it more reliable.

Speaker 1 00:01:59

You might think that sticking to a strict step-by-step approach would be more reliable, but the researchers found that this holistic approach actually makes the model more robust and adaptable. It's like solving a complex puzzle: it's not enough to just focus on the individual pieces; you need to be able to step back and see how they all fit together.

Speaker 0 00:02:21

Okay, so if coherent CoT is so effective, does it matter where errors happen in the chain of reasoning? Would a mistake early on be more damaging than a mistake at the end?

Speaker 1 00:02:33

The researchers actually investigated this by introducing deliberate errors, or noise, into different stages of the CoT process. What they discovered is that the model is much more sensitive to errors in the intermediate reasoning steps than to errors in the final answer.

Speaker 0 00:02:50

So a misstep early on can really throw things off, even if it seems to get back on track later. Kind of like building a house on a shaky foundation.

Speaker 1 00:02:59

Exactly. A small error early in the reasoning chain can have a cascading effect, potentially leading to a completely wrong conclusion, even if the final step seems logical based on the information it was given.

Speaker 0 00:03:12

That makes me wonder: if these models are so sensitive to the quality of the reasoning they're trained on, what can we do to help them learn more effectively and avoid these critical errors, especially in the middle of that reasoning process?

Speaker 1 00:03:25

The researchers tackled this question, and their proposed solution is something they call error-aware demonstrations. Instead of only training the LLM on perfectly executed examples of CoT, they introduce examples where mistakes are deliberately included.

Speaker 0 00:03:42

So they're giving it a crash course in how to spot and recover from common errors. How does that actually work?

Speaker 1 00:03:49

Essentially, they present the model with a problem and then show it an incorrect reasoning path, clearly labeling it as wrong. But crucially, they also provide a detailed explanation of why that reasoning is flawed, highlighting the specific errors and the logic behind the correct approach. So it's not just about showing it the wrong answers - it's about giving it a structured way to understand why those answers are wrong.

Speaker 0 00:04:12

That makes sense.
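As a rough illustration of what one error-aware demonstration might look like, following the structure just described (problem, incorrect reasoning clearly labeled as wrong, an explanation of the flaw, then the correct chain): the wording and the toy problem below are made up for this sketch, not taken from the paper.

```python
# Sketch of one error-aware demonstration:
# problem -> incorrect reasoning (labeled wrong) -> why it is wrong -> correct reasoning.
# The specific text is illustrative only.

error_aware_demo = """\
Problem: Anna holds a red ball, Bob a green ball, Carol a blue ball.
Anna swaps with Bob, then Bob swaps with Carol. Who holds the red ball?

Incorrect reasoning (wrong): After the first swap Bob has red. The second
swap is between Bob and Carol, but we apply it to Anna instead, so Anna
ends up with red.

Why it is wrong: The second swap involves Bob and Carol, not Anna. Applying
a swap to the wrong pair is a common bookkeeping error in this kind of task.

Correct reasoning: Start: Anna=red, Bob=green, Carol=blue.
Swap 1 (Anna, Bob): Anna=green, Bob=red, Carol=blue.
Swap 2 (Bob, Carol): Anna=green, Bob=blue, Carol=red.
Answer: Carol holds the red ball.
"""

# The demonstration is simply prepended to the new question in the prompt.
prompt = error_aware_demo + "\nNow solve the new problem step by step:\n..."
print(prompt)
```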
Speaker 0 00:04:14

Yeah. And it sounds like it would require a lot of thought to create these error-aware examples. Yeah, it does. But the researchers believe it's a critical step in training more robust and reliable LLMs, especially as we move toward more complex and nuanced tasks. And they actually put this method to the test using real LLMs and challenging benchmarks like date understanding and math problems. What they found is that adding these error-aware examples consistently boosted the models' performance.

Speaker 1 00:04:46

Wow. So just by incorporating those imperfect examples, we're kind of teaching it to think critically and learn from its mistakes. It makes you wonder if we as humans could benefit from adopting a similar approach to learning.

Speaker 0 00:04:58

That's a thought-provoking question.

Speaker 1 00:05:00

It suggests that maybe our traditional methods of education, which often focus on memorization and rote learning, could be missing a crucial element. What if, instead of striving for perfection, we embraced the idea of learning from our errors?

Speaker 0 00:05:16

That's a great point. Maybe by analyzing our own mistakes and understanding the reasoning behind them, we can develop a deeper and more nuanced understanding of the subject matter. I'm curious to know more about how the researchers actually implemented this error-aware training. What kind of specific examples did they use, and did they test this method with different types of LLMs?

Speaker 1 00:05:37

They did, and the results are quite interesting. But before we dive into those details, let's take a quick break, and when we come back we can explore some specific examples of how this error-aware training works in practice and discuss the broader implications of this research.

Speaker 0 00:05:52

So, to answer your question about the types of LLMs they used and the tasks they were tested on -

Speaker 1 00:05:57

They actually worked with several different models, including GPT-3.5 Turbo, GPT-4 Mini, Gemini Pro, and DeepSeek 67B.

Speaker 0 00:06:08

That's quite a lineup. So they really wanted to see if this approach worked across different model architectures. What were some of the tasks that they used?

Speaker 1 00:06:18

One example is date understanding, where the model might need to figure out the date 10 days before Christmas in a particular year. Another task involved tracking shuffled objects - keeping tabs on which ball is held by which person after a series of swaps.

Speaker 0 00:06:33

Those are great examples of tasks that really require you to think through the steps carefully - not something you can just solve in one go. So how did they actually incorporate the errors into these examples?

Speaker 1 00:06:43

They followed a specific format. They present the problem, then they offer a potential line of reasoning that is incorrect, clearly marking it as wrong. Then, crucially, they explain exactly why that reasoning is flawed, pointing out the missteps or the logical errors.
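For a feel of why both benchmark-style tasks mentioned above demand careful intermediate bookkeeping, here is a toy Python version of each. The actual benchmark items are not reproduced here; these functions just compute the ground truth programmatically.

```python
# Toy versions of the two task types discussed above: date understanding
# and tracking shuffled objects. Illustrative only, not the benchmark data.

from datetime import date, timedelta

def days_before_christmas(year: int, days: int = 10) -> date:
    """Date understanding: the date `days` days before Christmas of `year`."""
    return date(year, 12, 25) - timedelta(days=days)

def apply_swaps(holders: dict[str, str], swaps: list[tuple[str, str]]) -> dict[str, str]:
    """Tracking shuffled objects: who holds which ball after a series of swaps.
    The classic mistake is applying the swaps in the wrong order."""
    holders = dict(holders)                 # don't mutate the caller's dict
    for a, b in swaps:                      # order matters
        holders[a], holders[b] = holders[b], holders[a]
    return holders

if __name__ == "__main__":
    print(days_before_christmas(2024))      # 2024-12-15
    start = {"Anna": "red", "Bob": "green", "Carol": "blue"}
    print(apply_swaps(start, [("Anna", "Bob"), ("Bob", "Carol")]))
    # {'Anna': 'green', 'Bob': 'blue', 'Carol': 'red'}
```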
Speaker 0 00:07:00

So it's like, "here's a wrong way to think about this problem, and here's why it's wrong." Did they just throw in any kind of errors, or were there specific types of errors?

Speaker 1 00:07:10

The errors were designed to be relevant to the task at hand. For example, in the date understanding problem, an error might involve miscalculating the number of days in a month or forgetting to account for leap years. In the object tracking task, an error could be mixing up the order of the swaps or misidentifying who ends up with which object.

Speaker 0 00:07:34

That makes sense. You want the errors to reflect the kinds of mistakes that the model is likely to make. So I'm curious about the results. Did these error-aware examples actually improve the model's performance?

Speaker 1 00:07:45

Yes, they did. In most cases, adding these handcrafted, incorrect reasoning paths to the CoT demonstrations led to a significant improvement in the model's accuracy. Sometimes the improvement was over 5%, which is pretty substantial.

Speaker 0 00:07:59

Yeah, that's huge, especially considering how complex these reasoning tasks are. So it seems like incorporating these imperfect examples is really key to helping these models learn more effectively.

Speaker 1 00:08:12

Right. And they didn't stop there. They also wanted to make sure it was the explanation of the errors that was driving the improvement, not simply the presence of wrong answers.

Speaker 0 00:08:21

Yeah, that's a good point. Just showing the model a bunch of incorrect answers without explaining why they're wrong wouldn't necessarily be helpful. So how did they test that?

Speaker 1 00:08:30

They ran what's called an ablation study, where you remove one component at a time to see what effect it has. In this case, they tested the method with only the incorrect reasoning path, but without the explanation of why it was wrong.

Speaker 0 00:08:45

So they're giving them the wrong turns, but not the map to get back on course.

Speaker 1 00:08:49

Exactly. And the results were pretty clear: in most cases, the performance dropped when they removed the error explanations. This suggests that just presenting incorrect reasoning without explaining why it's wrong can actually confuse the model rather than help it learn.

Speaker 0 00:09:04

It seems like the explanations are really the key ingredient here. If you think about it from our own perspective, if someone keeps showing us the wrong way to do something without explaining why it's wrong, it's going to be a lot harder to learn the right way.

Speaker 1 00:09:18

Exactly. It highlights the crucial role of explanation in learning and reasoning, for both humans and these advanced language models.
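A hedged sketch of how that kind of ablation could be set up: build two versions of the same demonstration, one with the explanation and one without, and compare downstream accuracy. This is not the authors' harness; `evaluate` is a placeholder and the demonstration text is illustrative.

```python
# Sketch of the ablation described above: the same incorrect reasoning path,
# with and without the explanation of why it is wrong.

DEMO_CORE = (
    "Problem: What is the date 10 days before Christmas 2024?\n"
    "Incorrect reasoning (wrong): Ten days before December 25 is November 15.\n"
)
EXPLANATION = (
    "Why it is wrong: 25 - 10 = 15, so the date stays in December; no month "
    "boundary is crossed. Correct answer: December 15, 2024.\n"
)

conditions = {
    "error_aware": DEMO_CORE + EXPLANATION,   # full method
    "wrong_path_only": DEMO_CORE,             # ablation: explanation removed
}

def evaluate(demonstration: str) -> float:
    """Placeholder: would prepend the demonstration to held-out problems,
    query the model, and return accuracy."""
    return 0.0

for name, demo in conditions.items():
    print(name, evaluate(demo))
```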
Speaker 0 00:09:26

This is all really fascinating, but I have another question. We've been talking about these errors being crafted by humans, but what if we could somehow let the LLMs generate their own incorrect reasoning paths?

Speaker 1 00:09:39

That's a really interesting idea, and it's something the researchers explored as well. They tested a variation where they used model-generated incorrect reasoning paths instead of the handcrafted ones.

Speaker 0 00:09:50

Oh, so they were letting the model make its own mistakes and then analyze them. That's pretty cool.

Speaker 1 00:09:55

Yeah. The idea is that if the incorrect solutions are coming from the model itself, it might be even better at recognizing and avoiding those same mistakes in the future. Of course, they still had to provide the explanations for those errors manually.

Speaker 0 00:10:08

Right, because the model can come up with the wrong answer, but it still needs help understanding why it's wrong. Did this learn-from-your-own-mistakes approach actually work?

Speaker 1 00:10:18

The results were quite promising. They found that in most cases, using model-generated incorrect reasoning paths actually led to even better performance compared to using the handcrafted ones.

Speaker 0 00:10:28

Wow, that's really impressive. It seems like there's something particularly effective about letting these models confront and analyze their own errors.

Speaker 1 00:10:37

What's particularly interesting is that for DeepSeek 67B, the improvement jumped from 82.88% accuracy to 88.36%.

Speaker 0 00:10:47

That's a huge jump.

Speaker 1 00:10:47

Yeah. This approach seems to be unlocking a whole new level of learning for these models.
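A minimal sketch of that "learn from your own mistakes" variant: sample an incorrect chain from the model itself, attach a manually written explanation, and reuse it as a demonstration. `sample_incorrect_chain` is a stand-in rather than a real API; only the DeepSeek 67B accuracy figures quoted above are taken from the episode.

```python
# Sketch of the model-generated-error variant discussed above.
# The model supplies the wrong reasoning path; the explanation is still written by hand.

def sample_incorrect_chain(problem: str) -> str:
    """Placeholder for sampling reasoning chains from the model and keeping
    one whose final answer is wrong."""
    return "Ten days before December 25 is November 15."   # dummy wrong chain

def build_self_error_demo(problem: str, explanation: str, correct: str) -> str:
    wrong = sample_incorrect_chain(problem)
    return (f"Problem: {problem}\n"
            f"Incorrect reasoning (wrong): {wrong}\n"
            f"Why it is wrong: {explanation}\n"
            f"Correct reasoning: {correct}\n")

if __name__ == "__main__":
    print(build_self_error_demo(
        "What is the date 10 days before Christmas 2024?",
        "Subtracting 10 from the 25th stays within December.",
        "December 25 minus 10 days is December 15, 2024.",
    ))
    # Reported effect for DeepSeek 67B: 82.88% -> 88.36% accuracy,
    # an absolute gain of 5.48 percentage points.
```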
Speaker 0 00:10:53

It's really amazing, and it opens up some fascinating possibilities for the future. If these models are so good at learning from their own mistakes, what does that mean for AI development as a whole?

Speaker 1 00:11:05

Could we create systems that are constantly learning and adapting, becoming more accurate and sophisticated over time?

Speaker 0 00:11:12

It's an exciting thought. Imagine LLMs that are constantly evolving and refining their reasoning abilities, almost like self-taught geniuses. But before we get too carried away with the future, I want to come back to the practical applications of this research. How can this new understanding of error-aware CoT be used to improve the LLMs we use today?

Speaker 1 00:11:33

That's a great question. I think there are some really promising applications, especially when we think about how LLMs are being used in real-world scenarios. But that's a discussion for another time.

Speaker 0 00:11:43

So we've really been digging into this research on chain of thought prompting and how incorporating errors can actually make these LLMs better. But I'm thinking about the bigger picture: how can this understanding be applied to the real world?

Speaker 1 00:11:57

Well, think about how LLMs are being used today. They're powering chatbots, writing code, translating languages, even helping with scientific discoveries. And in many of these scenarios, accuracy and reliability are absolutely critical.

Speaker 0 00:12:11

Yeah, exactly. If you're relying on an LLM to generate code for, say, a self-driving car, a single error in reasoning could be catastrophic.

Speaker 1 00:12:19

Precisely. And this research suggests that by training these models with error-aware demonstrations, we can make them much more robust and less prone to making those critical mistakes. It's like building in a safety net: teaching them not just the right path, but also how to recognize and recover from wrong turns.

Speaker 0 00:12:35

So instead of just blindly following a set of rules, they can think more critically and adapt to unexpected situations.

Speaker 1 00:12:41

Right. And this could have a huge impact on how we design and deploy these models in the future. Imagine LLMs that are less susceptible to biases, less likely to be fooled by misleading information, and better able to explain their reasoning process.

Speaker 0 00:12:55

It's almost like we're moving from artificial intelligence to artificial understanding.

Speaker 1 00:12:59

That's a great way to put it. We're not just teaching these models to perform tasks - we're teaching them to think in a more human-like way. And that has profound implications for how we interact with and integrate these technologies into our lives.

Speaker 0 00:13:13

This whole conversation has really got me thinking differently about how we approach learning, not just for LLMs, but for ourselves. We often treat mistakes as something negative, something to be avoided at all costs. But this research suggests that mistakes can actually be incredibly valuable learning opportunities.

Speaker 1 00:13:31

I completely agree. If we're too afraid to make mistakes, we might miss out on the chance to really understand why those mistakes happen and how to avoid them in the future.

Speaker 0 00:13:39

It's like that famous quote from Thomas Edison: "I haven't failed. I've just found 10,000 ways that won't work."

Speaker 1 00:13:44

And that's the spirit I think we need to bring to AI development as well. We need to encourage these models to explore, to experiment, and yes, to make mistakes, because it's through those mistakes that they'll ultimately learn to reason more effectively and achieve a deeper understanding of the world around them.

Speaker 0 00:14:00

This deep dive has been really amazing. We've gone from understanding the basics of chain of thought prompting to exploring its nuances, its sensitivity to errors, and the incredible potential of error-aware training.

Speaker 1 00:14:15

Yeah, we've seen how coherent chain of thought allows LLMs to self-correct and how incorporating errors into their training can make them more robust and accurate.

Speaker 0 00:14:25

And we even touched on some of the philosophical implications of this research, raising questions about the future of AI development and the very nature of learning.

Speaker 1 00:14:33

It's clear that this is just the beginning of a very exciting journey. There's still so much to learn about how these models work and how we can harness their power responsibly.

Speaker 0 00:14:42

So for our listeners who want to delve deeper into this topic, we'll include links to the research paper and other resources in the show notes.

Speaker 1 00:14:47

And we encourage you to continue this exploration on your own. Think about your own learning process: do you benefit more from seeing perfect examples, or from understanding the reasons behind common mistakes?

Speaker 0 00:14:58

Maybe we can all learn a thing or two from these LLMs and their ability to turn errors into insights. Thanks for joining us on this deep dive.
Until next time, keep exploring, keep questioning, and keep learning.

More episodes from AI Synergy:

Episode 15 - Radio-Waves Vision Robots: How AI is Giving Machines Superhuman Senses

Join us as we dive into the revolutionary technology of Panoradar - a groundbreaking AI system giving robots a superpower once limited to sci-fi: seeing through walls, smoke, and obstacles. From life-saving search and rescue to self-driving breakthroughs, discover how AI and radio waves are reshaping the way robots "see" and interact with the world.

Episode 14 - OpenAI's O1: AI's Evolution Beyond Mere Pattern Recognition

Dive into the mind-bending world of AI's "reasoning era" as we explore OpenAI's groundbreaking O1 model (formerly Strawberry). O1 can strategize and think step-by-step like humans do. We unpack Sequoia Capital's latest insights on OpenAI's model O1, a game-changer in artificial reasoning, and dive into cognitive architectures.

Episode 13 - SoundStorm Unleashed: Revolutionizing Audio Generation with Lightning Speed

Dive into the frontier of #audio innovation as we break down SoundStorm. This cutting-edge model generates audio at speeds 100x faster than previous systems, redefining what's possible in #music, #podcasts, #games, and more. Join us as we explore the neural #codecs, parallel #decoding, and confidence-based sampling that make SoundStorm so powerful.
From hyper-realistic #dialogues to adaptive #soundscapes, discover how this tech could transform #entertainment, #accessibility, and even #healthcare.

Episode 12 - The Great AI Chip Race: Tech Giants Break Free from NVIDIA

In this episode, we explore how Amazon, Google, and other tech behemoths are shaking up the #AI industry by developing their own custom chips. From Amazon's secretive Annapurna Labs to Google's powerful Trillium processor, discover how this shift could revolutionize AI accessibility and pricing. Learn why major companies are reducing their reliance on NVIDIA, the implications for consumers and startups, and what this means for the future of artificial intelligence. Join us for an insightful discussion about what might be the biggest power shift in tech since the personal computing revolution.

Episode 11 - The Next 18 Months: Anthropic's Case for Urgent AI Regulation

Join us on The Next 18 Months, where we dive deep into Anthropic's compelling vision for the future of artificial intelligence and the crucial role that regulation will play in shaping it. With AI's capabilities advancing at breakneck speed, Anthropic's latest report warns that time is running out to establish guidelines that protect society without stifling innovation. In this episode, we explore why experts say the next 18 months could make or break AI's future and discuss the steps Anthropic believes are necessary to responsibly harness this transformative technology.
Tune in as we dissect the risks, the revolutionary potential, and the pressing need for policies that balance safety with progress.

Episode 10 - Building the Future of Gaming: AI Next-Frame Prediction

Join us as we explore the mind-bending world of AI-powered gaming, where next-frame prediction technology is revolutionizing how we interact with virtual worlds. We dive deep into groundbreaking projects from Descartes and Etched, including an AI version of Minecraft that responds to players' imagination in real time. Our expert guest breaks down the technology behind these innovations, from the specialized Sohu chip to the broader implications for education, healthcare, and creative expression. Discover how AI isn't just changing how we play games - it's reshaping how we interact with technology itself.

Episode 9 - The Search Wars: ChatGPT's New Web Powers vs Google & Perplexity

In today's episode, we're diving into the evolving world of search engines and how groundbreaking upgrades to ChatGPT's search capabilities could be changing the game. Imagine asking a question and getting a direct, sourced answer instead of endless scrolling. We'll explore the magic behind ChatGPT's new real-time web access, how it stacks up against Google Search and Perplexity, and why this tech revolution might reshape how we explore, learn, and connect with information. From travel tips to stock updates, join us as we break down this "search revolution" - and debate who might come out on top!

Episode 8 - AI Mediator: How Google DeepMind's Habermas Could Transform Conflict Resolution

Imagine a world where AI doesn't just mediate disagreements but actively helps prevent conflicts from escalating, both in person and online.
In this episode, we explore Google DeepMind's latest breakthrough, Habermas (the Habermas Machine dataset) - a powerful AI designed to resolve disputes by finding genuine common ground among diverse viewpoints. Joined by an expert guest, we'll dive into how this technology works, the promising research behind it, and the vast implications for promoting peace and understanding.

Episode 7 - The Centaur Conundrum: A Cognitive AI Model

Dive into the fascinating world of Centaur (https://huggingface.co/marcelbinz/Llama-3.1-Centaur-70B), an ambitious AI model. Explore how this groundbreaking technology is blurring the lines between artificial intelligence and cognitive science, and uncover the incredible potential it holds for unlocking the secrets of human behavior and cognition.