Posted on December 7, 2023 | by James Wiley

The Second Year of ChatGPT: Two Things to Watch 

Artificial Intelligence

Just over one year ago, OpenAI introduced ChatGPT, a tool that harnesses artificial intelligence (AI) to engage in human-like conversations—answering questions, composing emails, and writing code. Since its introduction, the tool has generated considerable attention for impressive achievements, including passing law, business, and medical school exams and tackling complex programming queries. It has also raised significant concerns, such as facilitating the creation and dissemination of misinformation, exhibiting biases, and making it easier for students to cheat on exams.

But what should we consider as ChatGPT enters its second year? In this week's post, we explore this question and highlight two things to watch over the coming year.

Increasing Focus on Foundation Models 

Most discussions about ChatGPT focus on it as a standalone application or as something that integrates with other products, such as Khan Academy’s tutoring tool, Khanmigo. While these views of ChatGPT capture most of its uses, ChatGPT is also an example of a robust model that can serve as the core of an artificial intelligence infrastructure. This type of model, called a “foundation model,” trains on large amounts of content (text, images, etc.) and can adapt to perform a wide range of downstream tasks, such as object recognition or information extraction (Figure 1).  

One key component of a foundation model is "transfer learning," where the knowledge obtained from training transfers to downstream tasks, or from one task (object recognition, for example) to another, such as information extraction. Improving this transfer learning requires robust computing power, such as that promised by the partnership between Amazon and Anthropic, and more training data, such as OpenAI extending its knowledge cutoff date from 2021 to 2024.
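The idea above can be sketched in miniature. The following is an illustrative toy, not OpenAI's actual training pipeline: a shared "foundation" representation is learned once from a broad corpus, then reused by two small task-specific functions standing in for downstream tasks. All class and function names here are invented for illustration.

```python
from collections import Counter

class FoundationModel:
    """Toy stand-in for a foundation model: learns one shared
    representation (here, just word frequencies) from a corpus."""
    def __init__(self, corpus):
        words = " ".join(corpus).lower().split()
        self.vocab_freq = Counter(words)  # the "pre-training" step

    def embed(self, text):
        # Represent a text by how common each of its words was in training.
        return [self.vocab_freq[w] for w in text.lower().split()]

# Downstream task A: flag texts dominated by words unseen in training.
def novelty_detector(model, text):
    emb = model.embed(text)
    return sum(1 for f in emb if f == 0) > len(emb) / 2

# Downstream task B (information-extraction stand-in): pull out the
# words the shared representation treats as rare, hence salient.
def extract_rare_terms(model, text):
    return [w for w in text.lower().split() if model.vocab_freq[w] <= 1]

model = FoundationModel(["the cat sat on the mat",
                         "the dog chased the cat"])
print(novelty_detector(model, "quantum flux capacitor"))   # True
print(extract_rare_terms(model, "the cat chased a mouse")) # ['chased', 'a', 'mouse']
```

Both tasks reuse the same pre-trained representation rather than training from scratch, which is the essence of transfer learning; in a real foundation model the shared representation is a large neural network rather than a frequency table.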

While foundation models like ChatGPT present risks, we expect them to become more prevalent in the coming year. As a result, we see ChatGPT becoming more than an application used on its own or embedded in another piece of technology: it is an engine that can power an entire artificial intelligence system aimed at real-world tasks.

Figure 1: Foundation Model (Source: Center for Research on Foundation Models (CRFM) & Institute for Human-Centered Artificial Intelligence (HAI), Stanford University).

Continued Questions about “Openness” 

OpenAI's founders initially described the company's mission as "to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." This mission statement (and the company's name) suggested that the technology would be transparent, reusable, and extensible enough to allow developers to build ChatGPT-powered applications.
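To make "extensible enough to build ChatGPT-powered applications" concrete, here is a minimal sketch of what such an integration looks like: assembling a request for OpenAI's Chat Completions endpoint. The API key is a placeholder, the tutoring prompt is invented for illustration, and the model name reflects what was available in late 2023; the code builds the request without sending it.

```python
import json

API_URL = "https://api.openai.com/v1/chat/completions"

def build_tutor_request(student_question, api_key="YOUR_API_KEY"):
    """Assemble (url, headers, body) for a hypothetical tutoring app."""
    payload = {
        "model": "gpt-3.5-turbo",
        "messages": [
            # A system message steers the model's behavior for the app.
            {"role": "system",
             "content": "You are a patient tutor. Guide the student; "
                        "do not simply give away answers."},
            {"role": "user", "content": student_question},
        ],
    }
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return API_URL, headers, json.dumps(payload)

url, headers, body = build_tutor_request("Why does x**2 grow faster than 2*x?")
```

Sending `body` as an HTTP POST to `url` with those headers is all an application like Khanmigo fundamentally needs to do per exchange; the product work lies in the prompts, guardrails, and user experience built around that call.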

However, since 2019, OpenAI itself has shifted from a non-profit to a for-profit structure. In 2020, the company restricted full access to the GPT-3 model, an earlier version of the current ChatGPT, to Microsoft. And OpenAI ended free access to its AI-powered text-to-image tool, DALL·E, replacing it with a freemium offering. As a result, the "openness" of OpenAI has been cast into doubt.

Beyond OpenAI's own choices, some recent research argues that the very notion of "openness" in artificial intelligence is problematic. Any developer seeking to build an artificial intelligence tool to compete with ChatGPT would need access to training data, software to build the tool, and computing power to train it. The authors argue this is nearly impossible: training data is often kept secret, large corporations usually control the software required to build such models, and the computing power needed to train them is beyond the reach of a typical developer or company.

Summary 

Of course, the coming year will involve discussions about how institutional leaders might consider deploying artificial intelligence in general, and ChatGPT in particular, at their institutions. For example, some organizations, such as Complete College America, have recently released playbooks to guide leaders through this process. Likewise, there will be a concentration on the impact of new ChatGPT functionality, such as the rumored "Q* Project," which suggests ChatGPT may move into solving elementary math problems. Lastly, we will hear more about efforts to address the risks of artificial intelligence by encouraging explainability and fairness in developing AI models.

However, while these are important focus areas, we assert that foundation models and the debate around "openness" have the greatest chance of changing the education technology landscape. Foundation models, for example, may drive a convergence of technology in education: with many solutions resting on a single model, downstream tasks may combine into one solution, perhaps shrinking the technology landscape. Likewise, if genuine "openness" takes hold in artificial intelligence, we may see a democratization of AI development, with smaller vendors creating competitors to ChatGPT and other AI tools, broadening the technology available to institutional leaders.

We will monitor these two areas throughout the year and report our findings. Please reach out with feedback or questions.  
