
François Chollet: AGI progress is accelerating towards 2030, symbolic models will reshape machine learning, and coding agents are revolutionizing automation

2026/04/11 07:40
16 min read


New AGI lab aims to revolutionize machine learning with symbolic models, moving beyond traditional deep learning.

Key Takeaways

  • AGI progress is expected to accelerate, with significant developments anticipated around 2030.
  • The new AGI research lab, NDA, aims to create a branch of machine learning fundamentally different from deep learning.
  • Symbolic models could provide more efficient and generalizable solutions than traditional parametric models.
  • AI and machine learning are expected to evolve toward optimality, moving away from current technology stacks.
  • Coding agents succeed because code offers verifiable reward signals, enabling automation in formal domains.
  • Progress of reasoning models in non-verifiable domains such as essay writing will be slow, owing to reliance on costly human-annotated data.
  • Code-based training environments have significantly advanced AI capabilities in programming; structured environments with verifiable rewards have had a transformative impact on performance.
  • AGI requires a model that can learn and adapt to new tasks efficiently from minimal data, as humans do.
  • We are on a trajectory to automate economically useful work before achieving true AGI.
  • Building AGI on top of current LLMs is seen as inefficient; the inevitability of AI progress points to a need for more efficient foundational structures.

Guest intro

François Chollet is the founder of a startup focused on developing AGI through program synthesis, which he co-founded with Zapier co-founder Mike Knoop after leaving Google in November 2024. He created the Keras deep-learning library in 2015 and published the ARC-AGI benchmark in 2019 to measure AI systems’ ability to solve novel reasoning problems. In 2024, he launched the ARC Prize, a $1 million competition to advance progress toward artificial general intelligence.

Why AGI progress is inevitable

  • AGI progress is expected to continue accelerating, with significant developments anticipated around 2030. — François Chollet
  • The inevitability of AI progress suggests that stopping it is unlikely. — François Chollet
  • A realistic timeline for AGI advances matters for planning AI development; new machine-learning paradigms and human-like learning efficiency are what progress toward AGI depends on.

The new frontier in machine learning at NDA

  • The goal of the new AGI research lab, NDA, is to create a new branch of machine learning that is fundamentally different from deep learning. — François Chollet
  • Appreciating this approach requires understanding current machine-learning paradigms and the limitations of deep learning.
  • Symbolic models could provide more efficient and generalizable solutions than traditional parametric models, and the paradigms developed at NDA could reshape the future of AI research.

The shift towards symbolic models

  • Symbolic models can provide more efficient and generalizable machine-learning solutions than traditional parametric models, with improved efficiency and generalization as the principal benefits. — François Chollet
  • Recognizing these advantages requires understanding the limitations of current deep-learning approaches.
  • The shift toward symbolic models is a move toward more optimal machine learning and could drive significant advances in AI.
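
The contrast between symbolic and parametric models can be made concrete with a toy task: recovering the rule behind a handful of input-output pairs. A parametric model fits continuous weights to approximate the data; a symbolic approach searches a discrete space of programs for one that explains the data exactly. The sketch below is purely illustrative (NDA's actual methods are not public) and enumerates a tiny hand-picked program space:

```python
# Toy symbolic regression: search a discrete space of candidate programs
# for one consistent with the examples. Illustrative only -- NDA's actual
# approach is not described in the interview.
CANDIDATES = {
    "x + 1": lambda x: x + 1,
    "x * 2": lambda x: x * 2,
    "x ** 2": lambda x: x ** 2,
}

def synthesize(examples):
    """Return the first candidate expression matching every example.

    Unlike gradient-fitted parameters, the result is an exact,
    human-readable rule that generalizes by construction.
    """
    for expr, fn in CANDIDATES.items():
        if all(fn(x) == y for x, y in examples):
            return expr
    return None  # no program in the space fits the data

# A few examples suffice to pin down the rule -- no large dataset needed:
# synthesize([(1, 2), (2, 4), (3, 6)]) returns "x * 2".
```

The point of the sketch is data efficiency: three examples fully determine the program, whereas a parametric fit would only approximate the mapping and offer no guarantee outside the training range.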

The future of AI and machine learning

  • Machine learning and AI will evolve toward optimality, moving away from current stacks. — François Chollet
  • The inevitability of AI progress suggests a need for more efficient foundational structures. — François Chollet
  • Anticipating these advances requires understanding today's limitations: the shift toward optimality means more efficient and more effective AI built on new paradigms.

The success of coding agents

  • Coding agents succeed because code offers a verifiable reward signal, enabling automation in formally verifiable domains. — François Chollet
  • Because correctness can be checked automatically, code, like mathematics, supports automation in ways most domains do not.
  • Understanding how reward signals work in machine learning explains the rapid advance of coding agents and suggests similar gains in other formal domains.
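
What "verifiable reward signal" means in practice: a candidate program can be executed against tests, yielding an automatic pass/fail score with no human in the loop. A minimal sketch (the task and helper names are hypothetical, not from the interview):

```python
def verifiable_reward(candidate_src: str, tests: list) -> float:
    """Execute a candidate program and score it against unit tests.

    Returns 1.0 only if every test passes -- a binary, automatically
    checkable signal, unlike a human judgment of essay quality.
    """
    namespace: dict = {}
    try:
        exec(candidate_src, namespace)   # define the candidate function
        for test in tests:
            if not test(namespace):      # each test inspects the namespace
                return 0.0
        return 1.0
    except Exception:
        return 0.0                       # crashing candidates score zero

# Hypothetical task: implement add(a, b).
tests = [lambda ns: ns["add"](2, 3) == 5,
         lambda ns: ns["add"](-1, 1) == 0]

good = "def add(a, b):\n    return a + b"
bad = "def add(a, b):\n    return a - b"
# verifiable_reward(good, tests) -> 1.0; verifiable_reward(bad, tests) -> 0.0
```

Because this check is cheap and exact, it can be run millions of times during training, which is precisely what makes formal domains amenable to automation.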

Challenges in non-verifiable domains

  • The progress of reasoning models in non-verifiable domains like essay writing will be slow due to reliance on costly human-annotated data. — François Chollet
  • Without an automatic correctness check, training signals must come from human annotation, which is expensive and scales poorly.
  • This dependence is the chief barrier in non-verifiable domains, highlights why current models struggle with complex tasks of this kind, and points to the need for new approaches.
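
The bottleneck is clearest when the two reward sources sit side by side: in a verifiable domain the checker is a free function call that can run unboundedly often, while in a non-verifiable domain every training signal is a paid human judgment. A minimal sketch of the contrast (an assumed setup for illustration, not a system described in the interview):

```python
# Contrast of reward sources. Illustrative only -- not a specific
# training pipeline from the interview.
def programmatic_reward(candidate, check) -> float:
    """Verifiable domain: the checker runs for free, any number of times."""
    return 1.0 if check(candidate) else 0.0

def human_annotated_reward(scores: list) -> float:
    """Non-verifiable domain: each score is one paid human annotation,
    so the signal is scarce, noisy, and expensive to scale."""
    if not scores:
        raise ValueError("no annotations, no training signal")
    return sum(scores) / len(scores)

# programmatic_reward(4, lambda c: c % 2 == 0) costs nothing;
# human_annotated_reward([0.5, 1.0]) consumed two annotators' time.
```

The asymmetry in cost per reward sample is why progress in essay-like domains is expected to lag behind coding and mathematics.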

Advancements in code-based training environments

  • The creation of code-based training environments has significantly advanced AI capabilities in programming. — François Chollet
  • Structured environments with verifiable reward signals give models a clear training signal, which explains their transformative impact on programming tasks.
  • Their success suggests that similar environments could benefit other domains and that further capability gains remain available.
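
A code-based training environment can be pictured as a gym-style loop: each episode presents a task prompt, the agent submits source code, and the environment executes it against tests to produce a reward. The sketch below is an illustrative toy (class and method names are assumptions, not a system named in the interview):

```python
class CodeTaskEnv:
    """Minimal gym-style coding environment (illustrative sketch).

    Each episode presents a task prompt; the agent submits source code,
    which is executed against hidden unit tests for a binary reward.
    """

    def __init__(self, prompt: str, tests):
        self.prompt = prompt
        self.tests = tests

    def reset(self) -> str:
        return self.prompt           # observation: the task description

    def step(self, candidate_src: str):
        namespace: dict = {}
        try:
            exec(candidate_src, namespace)
            passed = all(t(namespace) for t in self.tests)
        except Exception:
            passed = False
        reward = 1.0 if passed else 0.0
        return reward, True          # single-step episode: (reward, done)

env = CodeTaskEnv(
    prompt="Write a function square(n) returning n*n.",
    tests=[lambda ns: ns["square"](4) == 16],
)
```

Training then amounts to sampling candidate programs against thousands of such environments; the automatic, structured feedback is what distinguishes this setup from free-form text generation.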

The trajectory towards automation

  • We are on a trajectory to automate economically useful work before achieving true AGI. — François Chollet
  • Distinguishing automation from AGI is essential for reading current progress: systems can fully automate verifiable domains without being generally intelligent.
  • This trajectory sets expectations for future developments and again underscores the value of verifiable reward signals in AI training.

The inefficiency of building AGI on current LLMs

  • Building AGI on top of current LLMs would be inefficient and not optimal for future AI research. — François Chollet
  • This view rests on the limitations of current LLM technology and offers a critical perspective on the direction of AI research.
  • It implies that new, more efficient approaches are needed, and that optimality and efficiency should guide future AI development.
Disclosure: This article was edited by Editorial Team. For more information on how we create and review content, see our Editorial Policy.



© Decentral Media and Crypto Briefing® 2026.

Source: https://cryptobriefing.com/francois-chollet-agi-progress-is-accelerating-towards-2030-symbolic-models-will-reshape-machine-learning-and-coding-agents-are-revolutionizing-automation-y-combinator-startup-podcast/
