Our Mission: AI Research Automation
We are accelerating the advent of General AI through
cutting-edge neuro-symbolic research.
Trusted by the best
Solutions
We are building the solutions
for modern AI workflows
Platform
Extensity Research Services (ERS) SaaS Platform
Our next-generation neuro-symbolic AI platform solves some of the hardest business and research challenges of our time:
Ensuring consistency, relevance, quality, and distinction in content generation at scale
Validating results against trusted sources
Gaining insights from data through research and analytics
Publishing your findings more effectively through research automation
Framework
Agent Workflows API
We are creating agents that can self-improve based on their own experiences and predictions.
Create agents with custom models
Autonomously navigate and execute actions with greater efficiency and accuracy
Integration with various interfaces
Support for multimodal inputs and outputs
Research
Neuro-Symbolic AI
By combining the strengths of deep learning with human-like symbolic knowledge and reasoning, we aim to advance AI research to new frontiers.
Foundation models for scientific workflows
Reinforcement learning
Symbolic Reasoning and Knowledge Representation
Domain Generalization
Industries
No matter the industry,
we've got you covered
Empowering diverse industries with innovative
AI technology, created for scalability
Content Creation
Our advanced AI platform transforms how you create long-form content. From research papers and technical documentation to books, websites, and comprehensive reports, we deliver publication-ready content that maintains accuracy while engaging your audience. Experience faster, more efficient content creation that adapts to your unique requirements and elevates your communication impact.
Market Analysis
Whether you're crafting a compelling book, engaging blog posts, eye-catching ads, or any other type of content, our platform delivers high-quality results tailored to your needs. You can seamlessly generate diverse content variations that resonate with your audience and elevate your brand's presence. Experience the future of content creation with unparalleled efficiency and precision.
Use Cases
Real-world application
Discover how our technology is being used to solve
real-world challenges and drive innovation
From story to bestseller
50% of people want to write a book, but only about 0.1% actually publish one. With story.one, this can change by enabling anyone to become an author.
Discover More
Automated Theorem Proving
Discover how Third Wish Group and ExtensityAI combine neuro-symbolic AI with formal proof systems to revolutionize mathematical verification.
Discover More
We help future students find their personalized program
Studying is hard, but knowing what to study is even harder. We built an interactive speech-to-speech study bot to guide the paths of future academics.
Discover More
Testimonials
Hear from those who've
experienced the difference
“In partnership with ExtensityAI, we’re pushing the boundaries of publishing by integrating cutting-edge neuro-symbolic AI technology. This AI-driven platform transforms lived experiences into compelling narratives, enabling people from all walks of life to share their stories in ways that were previously unimaginable. By revolutionizing the book creation process, we're not just writing books—we're giving a voice to the unheard and preserving untold stories for future generations.”
Hannes Steiner
Founder @ Story.one
“Very nice. I love it. Totally confident that SW developers in a few years will learn a development paradigm like that in their training.”
Thomas Wildberger
Partner @ Prophet
“Interesting reading for those that are looking at the evolution of Symbolic AI methods.”
Pietro Leo
Executive Architect @ IBM
“The work is brave (very much challenging a dominant paradigm), and novel. Without hesitation, I give it my highest recommendation”
Gary Marcus
Professor emeritus @ New York University
“Working on SymbolicAI now – great work! This will be foundational!”
Alexander Morrise
Head of Research @ Graphistry
“Enjoyed working with an exceptionally talented team that values precision and quality at an outstanding pace. Their framework delivers what it promises and I see great growth potential.”
Andreas Stöckl
Professor @ University of Applied Sciences Upper Austria
“I truly admire what you guys have done. This idea indeed came after having some realisations on mixing some of the things that I am interested in. Particularly, cognitive theory, formal language in Mathematics, Neural Nets and consciousness.”
Juan Zambrano
CTO @ Third Wish Group
Publications
Read the science
behind our technology
Last revised: 21 Aug 2024 / arXiv.org
SymbolicAI: A framework for logic-based approaches combining generative models and solvers
We introduce SymbolicAI, a versatile and modular framework employing a logic-based approach to concept learning and flow management in generative processes.
Last revised: 2 May 2023 / arXiv.org
Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation
We study the problem of choosing algorithm hyper-parameters in unsupervised domain adaptation, i.e., with labeled data in a source domain and unlabeled data in a target domain, drawn from a different input distribution.
Last revised: 9 Oct 2024 / arXiv.org
Retrieval-Augmented Decision Transformer: External Memory for In-context RL
In-context learning (ICL) is the ability of a model to learn a new task by observing a few exemplars in its context. While prevalent in NLP, this capability has recently also been observed in Reinforcement Learning (RL) settings.
Last revised: 8 Oct 2024 / arXiv.org
Parameter Choice and Neuro-Symbolic Approaches for Deep Domain-Invariant Learning
As artificial intelligence (AI) systems advance, we move towards broad AI: systems capable of performing well on diverse tasks, understanding context, and adapting rapidly to new scenarios. A central challenge for broad AI systems is to generalize over tasks in related domains and to be robust to distribution shifts.
Last revised: 1 Oct 2024 / arXiv.org
Large Language Models Can Self-Improve At Web Agent Tasks
Training models to act as agents that can effectively navigate and perform actions in a complex environment, such as a web browser, has typically been challenging due to a lack of training data.
Last revised: 21 Aug 2024 / Arxiv.com
SymbolicAI: A framework for logic-based approaches combining generative models and solvers
We introduce SymbolicAI, a versatile and modular framework employing a logic-based approach to concept learning and flow management in generative processes.
Last revised: 2 May 2023 / Arxiv.com
Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation
We study the problem of choosing algorithm hyper-parameters in unsupervised domain adaptation, i.e., with labeled data in a source domain and unlabeled data in a target domain, drawn from a different input distribution.
Last revised: 9 Oct 2024 / Arxiv.com
Retrieval-Augmented Decision Transformer: External Memory for In-context RL
In-context learning (ICL) is the ability of a model to learn a new task by observing a few exemplars in its context. While prevalent in NLP, this capability has recently also been observed in Reinforcement Learning (RL) settings
Last revised: 8 Oct 2024 / Arxiv.com
Parameter Choice and Neuro-Symbolic Approaches for Deep Domain-Invariant Learning
As artificial intelligence (AI) systems advance, we move towards broad AI: systems capable of performing well on diverse tasks, understanding context, and adapting rapidly to new scenarios. A central challenge for broad AI systems is to generalize over tasks in related domains and being robust to distribution shifts.
Last revised: 1 Oct 2024 / Arxiv.com
Large Language Models Can Self-Improve At Web Agent Tasks
Training models to act as agents that can effectively navigate and perform actions in a complex environment, such as a web browser, has typically been challenging due to lack of training data.
Last revised: 21 Aug 2024 / Arxiv.com
SymbolicAI: A framework for logic-based approaches combining generative models and solvers
We introduce SymbolicAI, a versatile and modular framework employing a logic-based approach to concept learning and flow management in generative processes.
Last revised: 2 May 2023 / Arxiv.com
Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation
We study the problem of choosing algorithm hyper-parameters in unsupervised domain adaptation, i.e., with labeled data in a source domain and unlabeled data in a target domain, drawn from a different input distribution.
Last revised: 9 Oct 2024 / Arxiv.com
Retrieval-Augmented Decision Transformer: External Memory for In-context RL
In-context learning (ICL) is the ability of a model to learn a new task by observing a few exemplars in its context. While prevalent in NLP, this capability has recently also been observed in Reinforcement Learning (RL) settings
Last revised: 8 Oct 2024 / Arxiv.com
Parameter Choice and Neuro-Symbolic Approaches for Deep Domain-Invariant Learning
As artificial intelligence (AI) systems advance, we move towards broad AI: systems capable of performing well on diverse tasks, understanding context, and adapting rapidly to new scenarios. A central challenge for broad AI systems is to generalize over tasks in related domains and being robust to distribution shifts.
Last revised: 1 Oct 2024 / Arxiv.com
Large Language Models Can Self-Improve At Web Agent Tasks
Training models to act as agents that can effectively navigate and perform actions in a complex environment, such as a web browser, has typically been challenging due to lack of training data.
Last revised: 21 Aug 2024 / Arxiv.com
SymbolicAI: A framework for logic-based approaches combining generative models and solvers
We introduce SymbolicAI, a versatile and modular framework employing a logic-based approach to concept learning and flow management in generative processes.
Last revised: 2 May 2023 / Arxiv.com
Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation
We study the problem of choosing algorithm hyper-parameters in unsupervised domain adaptation, i.e., with labeled data in a source domain and unlabeled data in a target domain, drawn from a different input distribution.
Last revised: 9 Oct 2024 / Arxiv.com
Retrieval-Augmented Decision Transformer: External Memory for In-context RL
In-context learning (ICL) is the ability of a model to learn a new task by observing a few exemplars in its context. While prevalent in NLP, this capability has recently also been observed in Reinforcement Learning (RL) settings
Last revised: 8 Oct 2024 / Arxiv.com
Parameter Choice and Neuro-Symbolic Approaches for Deep Domain-Invariant Learning
As artificial intelligence (AI) systems advance, we move towards broad AI: systems capable of performing well on diverse tasks, understanding context, and adapting rapidly to new scenarios. A central challenge for broad AI systems is to generalize over tasks in related domains and being robust to distribution shifts.
Last revised: 1 Oct 2024 / Arxiv.com
Large Language Models Can Self-Improve At Web Agent Tasks
Training models to act as agents that can effectively navigate and perform actions in a complex environment, such as a web browser, has typically been challenging due to lack of training data.
Last revised: 21 Aug 2024 / Arxiv.com
SymbolicAI: A framework for logic-based approaches combining generative models and solvers
We introduce SymbolicAI, a versatile and modular framework employing a logic-based approach to concept learning and flow management in generative processes.
Last revised: 2 May 2023 / Arxiv.com
Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation
We study the problem of choosing algorithm hyper-parameters in unsupervised domain adaptation, i.e., with labeled data in a source domain and unlabeled data in a target domain, drawn from a different input distribution.
Last revised: 9 Oct 2024 / Arxiv.com
Retrieval-Augmented Decision Transformer: External Memory for In-context RL
In-context learning (ICL) is the ability of a model to learn a new task by observing a few exemplars in its context. While prevalent in NLP, this capability has recently also been observed in Reinforcement Learning (RL) settings
Last revised: 8 Oct 2024 / Arxiv.com
Parameter Choice and Neuro-Symbolic Approaches for Deep Domain-Invariant Learning
As artificial intelligence (AI) systems advance, we move towards broad AI: systems capable of performing well on diverse tasks, understanding context, and adapting rapidly to new scenarios. A central challenge for broad AI systems is to generalize over tasks in related domains and being robust to distribution shifts.
Last revised: 1 Oct 2024 / Arxiv.com
Large Language Models Can Self-Improve At Web Agent Tasks
Training models to act as agents that can effectively navigate and perform actions in a complex environment, such as a web browser, has typically been challenging due to lack of training data.
Last revised: 21 Aug 2024 / Arxiv.com
SymbolicAI: A framework for logic-based approaches combining generative models and solvers
We introduce SymbolicAI, a versatile and modular framework employing a logic-based approach to concept learning and flow management in generative processes.
Last revised: 2 May 2023 / Arxiv.com
Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation
We study the problem of choosing algorithm hyper-parameters in unsupervised domain adaptation, i.e., with labeled data in a source domain and unlabeled data in a target domain, drawn from a different input distribution.
Last revised: 9 Oct 2024 / Arxiv.com
Retrieval-Augmented Decision Transformer: External Memory for In-context RL
In-context learning (ICL) is the ability of a model to learn a new task by observing a few exemplars in its context. While prevalent in NLP, this capability has recently also been observed in Reinforcement Learning (RL) settings
Last revised: 8 Oct 2024 / Arxiv.com
Parameter Choice and Neuro-Symbolic Approaches for Deep Domain-Invariant Learning
As artificial intelligence (AI) systems advance, we move towards broad AI: systems capable of performing well on diverse tasks, understanding context, and adapting rapidly to new scenarios. A central challenge for broad AI systems is to generalize over tasks in related domains and being robust to distribution shifts.
Last revised: 1 Oct 2024 / Arxiv.com
Large Language Models Can Self-Improve At Web Agent Tasks
Training models to act as agents that can effectively navigate and perform actions in a complex environment, such as a web browser, has typically been challenging due to lack of training data.
Last revised: 21 Aug 2024 / Arxiv.com
SymbolicAI: A framework for logic-based approaches combining generative models and solvers
We introduce SymbolicAI, a versatile and modular framework employing a logic-based approach to concept learning and flow management in generative processes.
Last revised: 2 May 2023 / Arxiv.com
Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation
We study the problem of choosing algorithm hyper-parameters in unsupervised domain adaptation, i.e., with labeled data in a source domain and unlabeled data in a target domain, drawn from a different input distribution.
Last revised: 9 Oct 2024 / Arxiv.com
Retrieval-Augmented Decision Transformer: External Memory for In-context RL
In-context learning (ICL) is the ability of a model to learn a new task by observing a few exemplars in its context. While prevalent in NLP, this capability has recently also been observed in Reinforcement Learning (RL) settings
Last revised: 8 Oct 2024 / Arxiv.com
Parameter Choice and Neuro-Symbolic Approaches for Deep Domain-Invariant Learning
As artificial intelligence (AI) systems advance, we move towards broad AI: systems capable of performing well on diverse tasks, understanding context, and adapting rapidly to new scenarios. A central challenge for broad AI systems is to generalize over tasks in related domains and being robust to distribution shifts.
Last revised: 1 Oct 2024 / Arxiv.com
Large Language Models Can Self-Improve At Web Agent Tasks
Training models to act as agents that can effectively navigate and perform actions in a complex environment, such as a web browser, has typically been challenging due to lack of training data.
Last revised: 21 Aug 2024 / Arxiv.com
SymbolicAI: A framework for logic-based approaches combining generative models and solvers
We introduce SymbolicAI, a versatile and modular framework employing a logic-based approach to concept learning and flow management in generative processes.
Last revised: 2 May 2023 / Arxiv.com
Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation
We study the problem of choosing algorithm hyper-parameters in unsupervised domain adaptation, i.e., with labeled data in a source domain and unlabeled data in a target domain, drawn from a different input distribution.
Last revised: 9 Oct 2024 / Arxiv.com
Retrieval-Augmented Decision Transformer: External Memory for In-context RL
In-context learning (ICL) is the ability of a model to learn a new task by observing a few exemplars in its context. While prevalent in NLP, this capability has recently also been observed in Reinforcement Learning (RL) settings
Last revised: 8 Oct 2024 / Arxiv.com
Parameter Choice and Neuro-Symbolic Approaches for Deep Domain-Invariant Learning
As artificial intelligence (AI) systems advance, we move towards broad AI: systems capable of performing well on diverse tasks, understanding context, and adapting rapidly to new scenarios. A central challenge for broad AI systems is to generalize over tasks in related domains and being robust to distribution shifts.
Last revised: 1 Oct 2024 / Arxiv.com
Large Language Models Can Self-Improve At Web Agent Tasks
Training models to act as agents that can effectively navigate and perform actions in a complex environment, such as a web browser, has typically been challenging due to lack of training data.
Last revised: 21 Aug 2024 / Arxiv.com
SymbolicAI: A framework for logic-based approaches combining generative models and solvers
We introduce SymbolicAI, a versatile and modular framework employing a logic-based approach to concept learning and flow management in generative processes.
Last revised: 2 May 2023 / Arxiv.com
Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation
We study the problem of choosing algorithm hyper-parameters in unsupervised domain adaptation, i.e., with labeled data in a source domain and unlabeled data in a target domain, drawn from a different input distribution.
Last revised: 9 Oct 2024 / Arxiv.com
Retrieval-Augmented Decision Transformer: External Memory for In-context RL
In-context learning (ICL) is the ability of a model to learn a new task by observing a few exemplars in its context. While prevalent in NLP, this capability has recently also been observed in Reinforcement Learning (RL) settings
Last revised: 8 Oct 2024 / Arxiv.com
Parameter Choice and Neuro-Symbolic Approaches for Deep Domain-Invariant Learning
As artificial intelligence (AI) systems advance, we move towards broad AI: systems capable of performing well on diverse tasks, understanding context, and adapting rapidly to new scenarios. A central challenge for broad AI systems is to generalize over tasks in related domains and being robust to distribution shifts.
Last revised: 1 Oct 2024 / Arxiv.com
Large Language Models Can Self-Improve At Web Agent Tasks
Training models to act as agents that can effectively navigate and perform actions in a complex environment, such as a web browser, has typically been challenging due to lack of training data.
Last revised: 21 Aug 2024 / Arxiv.com
SymbolicAI: A framework for logic-based approaches combining generative models and solvers
We introduce SymbolicAI, a versatile and modular framework employing a logic-based approach to concept learning and flow management in generative processes.
Last revised: 2 May 2023 / Arxiv.com
Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation
We study the problem of choosing algorithm hyper-parameters in unsupervised domain adaptation, i.e., with labeled data in a source domain and unlabeled data in a target domain, drawn from a different input distribution.
Last revised: 9 Oct 2024 / Arxiv.com
Retrieval-Augmented Decision Transformer: External Memory for In-context RL
In-context learning (ICL) is the ability of a model to learn a new task by observing a few exemplars in its context. While prevalent in NLP, this capability has recently also been observed in Reinforcement Learning (RL) settings
Last revised: 8 Oct 2024 / Arxiv.com
Parameter Choice and Neuro-Symbolic Approaches for Deep Domain-Invariant Learning
As artificial intelligence (AI) systems advance, we move towards broad AI: systems capable of performing well on diverse tasks, understanding context, and adapting rapidly to new scenarios. A central challenge for broad AI systems is to generalize over tasks in related domains and being robust to distribution shifts.
Last revised: 1 Oct 2024 / Arxiv.com
Large Language Models Can Self-Improve At Web Agent Tasks
Training models to act as agents that can effectively navigate and perform actions in a complex environment, such as a web browser, has typically been challenging due to lack of training data.
Last revised: 21 Aug 2024 / Arxiv.com
SymbolicAI: A framework for logic-based approaches combining generative models and solvers
We introduce SymbolicAI, a versatile and modular framework employing a logic-based approach to concept learning and flow management in generative processes.
Last revised: 2 May 2023 / Arxiv.com
Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation
We study the problem of choosing algorithm hyper-parameters in unsupervised domain adaptation, i.e., with labeled data in a source domain and unlabeled data in a target domain, drawn from a different input distribution.
Last revised: 9 Oct 2024 / Arxiv.com
Retrieval-Augmented Decision Transformer: External Memory for In-context RL
In-context learning (ICL) is the ability of a model to learn a new task by observing a few exemplars in its context. While prevalent in NLP, this capability has recently also been observed in Reinforcement Learning (RL) settings
Last revised: 8 Oct 2024 / Arxiv.com
Parameter Choice and Neuro-Symbolic Approaches for Deep Domain-Invariant Learning
As artificial intelligence (AI) systems advance, we move towards broad AI: systems capable of performing well on diverse tasks, understanding context, and adapting rapidly to new scenarios. A central challenge for broad AI systems is to generalize over tasks in related domains and being robust to distribution shifts.
Last revised: 1 Oct 2024 / Arxiv.com
Large Language Models Can Self-Improve At Web Agent Tasks
Training models to act as agents that can effectively navigate and perform actions in a complex environment, such as a web browser, has typically been challenging due to lack of training data.
Last revised: 21 Aug 2024 / Arxiv.com
SymbolicAI: A framework for logic-based approaches combining generative models and solvers
We introduce SymbolicAI, a versatile and modular framework employing a logic-based approach to concept learning and flow management in generative processes.
Last revised: 2 May 2023 / Arxiv.com
Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation
We study the problem of choosing algorithm hyper-parameters in unsupervised domain adaptation, i.e., with labeled data in a source domain and unlabeled data in a target domain, drawn from a different input distribution.
Last revised: 9 Oct 2024 / Arxiv.com
Stay connected
Join our community and meet other creators
Blog
Latest news from our blog
Get news and product updates
ExtensityAI FlexCo
Wels, Austria
office@extensity.ai
INDUSTRIES
CUSTOMER STORIES