- Hugging Face reaches $2 billion valuation to build the GitHub of machine learning
- The Future of AI Is Thrilling, Terrifying, Confusing, and Fascinating
- Google is beta testing its AI future – Test Kitchen
- Hugging Face reaches $2 billion valuation to build the GitHub of machine learning
https://www.brinknews.com/artificial-intelligence-uses-a-computer-chip-designed-for-video-games-does-that-matter/
- Romain Dillet @romaindillet
- May 9, 2022
Notes:
Hugging Face has closed a new round of funding: a $100 million Series C that values the company at $2 billion.
Hugging Face released the Transformers library on GitHub and instantly attracted a ton of attention — it currently has 62,000 stars and 14,000 forks on the platform.
With Transformers, you can leverage popular NLP models, such as BERT, GPT-2, T5 or DistilBERT and use those models to manipulate text in one way or another. For instance, you can classify text, extract information, automatically answer questions, summarize text, generate text, etc.
Due to the success of this library, Hugging Face quickly became the main repository for all things related to machine learning models.
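As a sketch of what the Transformers library makes possible (assuming the `transformers` package is installed and its default sentiment-analysis model can be downloaded), a text-classification pipeline takes a few lines to set up:

```python
# Minimal sketch of the Transformers pipeline API for text classification.
# Requires: pip install transformers torch
# The default model is downloaded on first use.
from transformers import pipeline

# "sentiment-analysis" is one of several built-in tasks; others include
# "question-answering", "summarization", and "text-generation".
classifier = pipeline("sentiment-analysis")

result = classifier("Hugging Face raised a $100 million Series C round.")
# result is a list of dicts like [{"label": "POSITIVE", "score": 0.99...}]
print(result[0]["label"], round(result[0]["score"], 3))
```

The same `pipeline` entry point covers the other tasks the note mentions (classification, question answering, summarization, generation); only the task string and input change.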
- The Future of AI Is Thrilling, Terrifying, Confusing, and Fascinating
https://www.theringer.com/2022/5/10/23064766/the-future-of-ai-is-thrilling-terrifying-confusing-and-fascinating
- Derek Thompson
- May 10, 2022
Notes:
This might sound like a hot take but it’s not: In 50 years, when historians look back on the crazy 2020s, they might point to advances in artificial intelligence as the most important long-term development of our time. We are building machines that can mimic human language, human creativity, and human thought. What will that mean for the future of work, morality, and economics? Bestselling author Steven Johnson joins the podcast to talk about the most exciting and scary ideas in artificial intelligence and an article he wrote for The New York Times Magazine about the frontier of AI.
GPT-3 is a subset of AI. It's a specific implementation of a category known as large language models, and it also belongs to the families of neural nets and deep learning.
It is basically a neural net modeled very loosely on the structure of the human brain, though we should not take that biological analogy too far. It goes through a training process in which it is shown a massive corpus of text: basically a curated version of the open web, Wikipedia, and a body of digitized books that are in the public domain.
And if you have a big enough corpus of text and a deep enough neural network, it turns out that computers over the last couple of years have gotten quite good at continuing human-authored text.
It is using its understanding of millions and millions of emails already sent to predict the next word in the email that you are sending.
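The next-word-prediction idea described above can be illustrated with a toy model. This is only a sketch of the principle: it counts which word follows which in a tiny training corpus and suggests the most frequent follower, whereas real systems like Smart Compose and GPT-3 use large neural language models, not bigram counts. The corpus text here is invented for the example.

```python
from collections import Counter, defaultdict

# Toy "training corpus" standing in for millions of already-sent emails.
corpus = (
    "thanks for the update please let me know if you need anything "
    "please let me know when you are free thanks for the help"
).split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("let"))     # "me" — "let me" appears twice in the corpus
print(predict_next("thanks"))  # "for" — "thanks for" appears twice
```

Scaling this up — from bigram counts to a deep neural network, and from one string to a curated slice of the web — is, very roughly, the jump from this toy to GPT-3.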
- Google is beta testing its AI future
https://www.cmaj.ca/content/193/34/E1351
- James Vincent
- May 11, 2022
Notes:
It’s clear that the future of Google is tied to AI language models. At this year’s I/O conference, the company announced a raft of updates that rely on this technology, from new “multisearch” features that let you pair image searches with text queries to improvements for Google Assistant and support for 24 new languages in Google Translate.
Google itself has seriously mishandled internal criticism, firing employees who raised issues with bias in language models and damaging its reputation with the AI community. Researchers continue to find issues with AI language models, from failures with gender and racial biases to the fact that these models have a tendency to simply make things up.
Now, though, the company seems to be taking something of a step back — or rather a slower step forward.
AI Test Kitchen is an Android app that will give select users limited access to Google's latest and greatest AI language model, LaMDA 2. The model itself is an update to the original LaMDA announced at last year's I/O and has the same basic functionality: you talk to it, and it talks back. But Test Kitchen wraps the system in a new, accessible interface, which encourages users to give feedback about its performance.
The app has three modes: “Imagine It,” “Talk About It,” and “List It,” with each intended to test a different aspect of the system’s functionality. “Imagine It” asks users to name a real or imaginary place, which LaMDA will then describe (the test is whether LaMDA can match your description); “Talk About It” offers a conversational prompt (like “talk to a tennis ball about dog”) with the intention of testing whether the AI stays on topic; while “List It” asks users to name any task or topic, with the aim of seeing if LaMDA can break it down into useful bullet points (so, if you say “I want to plant a vegetable garden,” the response might include sub-topics like “What do you want to grow?” and “Water and care”).
Once you see LaMDA in action, it's hard not to imagine how technology like this will change Google in the future, particularly its biggest product: Search. Although Google stresses that AI Test Kitchen is just a research tool, its functionality connects very obviously with the company's services. Keeping a conversation on-topic is vital for Google Assistant, for example, while the "List It" mode in Test Kitchen is near-identical to Google's "Things to know" feature, which breaks down tasks and topics into bullet points in search.