ChatGPT got some upgrades (Week 8)
Good afternoon!
You might have already seen it, but OpenAI released some big upgrades for its baby and our best friend, ChatGPT.
In other big news: we will be undergoing some upgrades ourselves, which means none of our newsletters will be hitting your mailbox in the next two weeks.
Let's dive into this week's news.
Table of contents
1 prompt gives you photorealistic 1-minute videos
Apple's first AI model?
What is the impact of AI on cybersecurity?
Meta joining the AI-chip race
Updates
1 prompt gives you photorealistic 1-minute videos
Sora is an AI model from OpenAI that creates realistic and imaginative scenes from text instructions. This text-to-video model can generate videos up to a minute long while maintaining visual quality and adhering to the user's prompt. It understands and simulates the physical world in motion, generating complex scenes with multiple characters, specific types of motion, and accurate details of both subject and background. The model's deep understanding of language lets it accurately interpret prompts and generate compelling characters that express vibrant emotions.

The current model still has weaknesses: it struggles to accurately simulate the physics of complex scenes and to understand specific instances of cause and effect. OpenAI is taking several important safety steps before making Sora available in its products, including working with domain experts and building tools to help detect misleading content. Despite extensive research and testing, OpenAI acknowledges that it cannot predict all the beneficial ways people will use the technology, nor all the ways people will abuse it.

Under the hood, Sora is a diffusion model that uses a transformer architecture and builds on past research in DALL·E and GPT models. It can generate entire videos all at once or extend generated videos to make them longer. It can also take an existing still image and generate a video from it, or take an existing video and extend it or fill in missing frames.
In other OpenAI news, OpenAI is testing a memory feature for ChatGPT that lets it retain information across multiple conversations, improving the relevance and helpfulness of future interactions. Users can control ChatGPT's memory by explicitly asking it to remember something, viewing and deleting specific memories, or turning off memory entirely. The memory system evolves with user interactions and is not linked to specific conversations, meaning deleting a chat does not erase its memories. The feature is being rolled out to a small portion of free and Plus users and will be expanded to more users soon. Additionally, memory-enabled GPTs will be available, and users will have control over how and when their memories are used in chats.
All this development must be paid for somehow; luckily for OpenAI, there is no shortage of money. OpenAI surpassed $2 billion in revenue in December 2023, primarily due to the success of its ChatGPT product. The company anticipates doubling this figure in 2025, driven by strong demand from businesses looking to integrate generative AI tools into the workplace. OpenAI's annualized revenue reached $1.6 billion in December, up from $1.3 billion in mid-October. The San Francisco-based startup has been valued at over $80 billion by investors. OpenAI's CEO, Sam Altman, is currently in talks with investors, including the UAE, to raise funds for a tech initiative aimed at boosting global chip-building capacity and expanding AI capabilities.
Apple's first AI model?
Apple has released an open-source AI model called "MGIE" that can edit images based on natural language instructions. The model leverages multimodal large language models (MLLMs) to interpret user commands and perform pixel-level manipulations. MGIE can handle various editing aspects, such as Photoshop-style modification, global photo optimization, and local editing. The model was presented in a paper accepted at the International Conference on Learning Representations (ICLR) 2024, one of the top venues for AI research. MGIE is available as an open-source project on GitHub, where users can find the code, data, and pre-trained models. The project also provides a demo notebook that shows how to use MGIE for various editing tasks. MGIE is a breakthrough in the field of instruction-based image editing, which is a challenging and important task for both AI and human creativity. MGIE demonstrates the potential of using MLLMs to enhance image editing and opens up new possibilities for cross-modal interaction and communication.
Discussion of the week
Generative artificial intelligence will continue to dominate in 2024, with predictions from experts on how the technology will be used, abused, and leveraged in surprising ways. The cybersecurity stakes are rising as the AI boom continues, with a U.S. presidential election in November, a persistent skills gap in the cybersecurity sector, and the rise of ransomware threats. Threat actors will shift their attention to AI systems as the newest vector for targeting organizations, exploiting vulnerabilities in sanctioned AI deployments and blind spots created by employees' unsanctioned use of AI tools. The democratization of AI tools will lead to a rise in more advanced attacks against firmware and even hardware. The widespread availability of AI poses an unprecedented challenge for cybersecurity, and if we fail, the risk of successful hacks becoming commonplace and wreaking havoc in the near future is a clear and present danger.
Hackers use AI in numerous ways, here are a few listed by Morgan Stanley:
Social engineering schemes
These schemes rely on psychological manipulation to trick individuals into revealing sensitive information or making other security mistakes. They include a broad range of fraudulent activity categories, including phishing, vishing and business email compromise scams.
AI allows cybercriminals to automate many of the processes used in social-engineering attacks, as well as create more personalized, sophisticated and effective messaging to fool unsuspecting victims. This means cybercriminals can generate a greater volume of attacks in less time, and experience a higher success rate.
Password hacking
Cybercriminals exploit AI to improve the algorithms they use for deciphering passwords. The enhanced algorithms provide quicker and more accurate password guessing, which allows hackers to become more efficient and profitable. This may lead to an even greater emphasis on password hacking by cybercriminals.
Deepfakes
This type of deception leverages AI's ability to easily manipulate visual or audio content and make it seem legitimate. This includes using phony audio and video to impersonate another individual. The doctored content can then be broadly distributed online in seconds, including on influential social media platforms, to create stress, fear or confusion among those who consume it.
Cybercriminals can use deepfakes in conjunction with social engineering, extortion and other types of schemes.
Data poisoning
Hackers "poison" or alter the training data used by an AI algorithm to influence the decisions it ultimately makes. In short, the algorithm is being fed deceptive information, and bad input leads to bad output.
Additionally, data poisoning can be difficult and time-consuming to detect. So, by the time it's discovered, the damage could be severe.
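The "bad input leads to bad output" mechanism can be seen in a toy sketch. Everything below is invented for illustration: a tiny 1-nearest-neighbor classifier over a single numeric feature, where an attacker who can slip a few mislabeled samples into the training data flips the model's verdict on a suspicious input.

```python
def predict(x, data):
    # 1-nearest-neighbor: return the label of the closest training point
    return min(data, key=lambda point: abs(x - point[0]))[1]

# Clean training set: "benign" samples cluster near 1.0, "malicious" near 9.0
clean = [(1.0, "benign"), (1.2, "benign"),
         (8.8, "malicious"), (9.0, "malicious")]

# The attacker injects deceptively mislabeled samples into the training data
poisoned = clean + [(8.5, "benign"), (8.6, "benign")]

predict(8.55, clean)     # -> "malicious" (closest clean point is 8.8)
predict(8.55, poisoned)  # -> "benign": a poisoned sample now sits closest
```

Real-world poisoning targets far larger models, but the principle is the same: the attacker never touches the deployed model, only the data it learns from, which is part of why the attack is hard to detect.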
More updates
Meta joining the AI-chip race
Meta Platforms, the parent company of Facebook, is set to deploy a second-generation custom chip in its data centers this year to support its AI initiatives. This in-house chip aims to reduce Meta's reliance on Nvidia chips, which dominate the market and contribute to the rising costs associated with running AI workloads. The company has been investing heavily in specialized chips and reconfiguring data centers to accommodate them, as the scale of its operations could potentially save hundreds of millions of dollars in annual energy costs and billions in chip purchasing costs. The new chip, internally referred to as "Artemis," is designed for inference tasks, which involve using models to make ranking judgments and generate responses to user prompts. Meta plans to use this chip in conjunction with off-the-shelf graphics processing units (GPUs) to deliver optimal performance and efficiency on Meta-specific workloads.