In the rapidly evolving landscape of computing, AI-powered PCs are revolutionizing the way we interact with technology. These intelligent systems are not just about faster processing or smarter algorithms; they are fundamentally transforming how we manage and store our data.
With AI PCs, data placement has never been more critical. The advanced storage features in these systems ensure that data is not only stored efficiently but also accessed and used in ways that enhance performance and security. As we enter the era of AI-driven computing, understanding where and how to place user data becomes pivotal to harnessing the full potential of these cutting-edge machines. With SSDs — the predominant devices where PC data is stored — we have a considerable opportunity to enhance the user experience while running AI applications.
SSDs are currently unable to discern which data is most important to the user or system. This is a significant limitation, especially as we move toward a future where AI data needs to be prioritized. AI workloads often involve real-time processing and decision-making, which requires quick access to the most relevant data. Treating all data the same — as current approaches like LRU (least recently used) caching do — leads to inefficiencies and slower performance.
With the increasing importance of on-device AI, we need to give higher priority to the data required by AI and other data-intensive applications. To do this, SSDs need additional information from the host marking which data (such as AI model data) is most important; these markers are called host assists. The SSD can, in turn, place that data in its low-latency cache with the lowest access time. For example, our own internal tests show that we can improve model loading times by up to 80% when we use host-assist capabilities. On-device AI will run with many smaller models residing in storage, and it will need to load a particular model on demand. (I talked about this issue in a previous blog: AI in PC: Why not? | Micron Technology Inc.)
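To make the contrast concrete, here is a minimal, purely illustrative Python sketch. It is not Micron firmware or any actual SSD caching algorithm; the class names, cache capacity and workload are invented for the example. It simply shows why a plain LRU policy lets background traffic push AI model blocks out of a cache, while a cache that honors a host "pin" hint keeps them resident.

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least recently used block; every block is treated the same."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()              # lba -> pinned flag

    def evict(self):
        self.blocks.popitem(last=False)          # drop the oldest entry

    def access(self, lba, pinned=False):
        if lba in self.blocks:
            self.blocks.move_to_end(lba)         # refresh recency on a hit
            return True
        if len(self.blocks) >= self.capacity:
            self.evict()
        self.blocks[lba] = pinned
        return False

class HintedCache(LRUCache):
    """Same policy, except blocks the host marked as important are kept."""
    def evict(self):
        for lba, pinned in self.blocks.items():
            if not pinned:                       # evict the oldest unpinned block
                del self.blocks[lba]
                return
        self.blocks.popitem(last=False)          # everything pinned: fall back to LRU

# Four AI-model blocks are loaded, then background writes flood the cache.
model = [(f"model-{i}", True) for i in range(4)]
background = [(f"bg-{i}", False) for i in range(8)]

for cache in (LRUCache(capacity=6), HintedCache(capacity=6)):
    for lba, pinned in model + background:
        cache.access(lba, pinned)
    still_cached = sum(cache.access(lba) for lba, _ in model)
    print(f"{type(cache).__name__}: {still_cached} of 4 model blocks still cached")
```

In the plain LRU cache none of the model blocks survive the background writes, while the hinted cache keeps all of them — which is exactly the behavior host assists are meant to enable.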
Model loading time is therefore a useful yardstick for the improvements achievable through SSD host assists. There are several ways for an SSD to receive this assistance from the host. Micron has collaborated with Microsoft to introduce several improvements that help SSDs understand which data is most important. Some of these approaches also enhance the reliability of the SSDs.
- The timestamp feature gives the SSD a real-world time reference: the operating system periodically provides the SSD with the current time. This helps the SSD track the age of data more accurately, leading to better reliability. By knowing the exact age of the data, the SSD can make more informed decisions about which data to prioritize and which to discard. With timestamping, the SSD can manage internal caching more effectively, keeping recently accessed data readily available, reducing latency and improving responsiveness. Timestamps also help identify old or unused data, making it easier to target for deletion during garbage collection; this frees up space and keeps write speeds optimal. Further, with timestamping, write cycles can be evenly distributed across the drive, extending the SSD’s lifespan and ensuring consistent performance. (A sketch of the timestamp payload a host might send appears after this list.)
- The host memory buffer (HMB) feature gives the SSD exclusive access to a portion of system DRAM for its own use. Increasing the HMB gives the SSD more buffer space for data and data properties, so more of the FTL (flash translation layer) tables can be held in fast memory, enabling more efficient data mapping and reducing SSD overhead. With a larger buffer, the SSD can cache important data more effectively, leading to faster access times and improved performance. By leveraging the system’s DRAM as cache, the HMB significantly cuts the time needed to access data, which translates into quicker read and write operations and more efficient data transfers, enhancing system performance. This gain comes at no additional cost or power because the buffer is carved out of existing system memory. (A back-of-the-envelope sizing sketch follows this list.)
- The host system can send additional metadata on read and write commands to the SSD. This metadata, known as data hints, gives the SSD insight into how the data will be used (for example, read frequently). By understanding these usage patterns, the SSD can prioritize hinted data, ensuring that it gets the fastest access. The metadata can also include hints such as expected lifespan or access patterns. These hints allow the SSD to manage data placement more effectively, reduce write amplification and improve overall performance. (A sketch of how such a hint might be encoded follows this list.)
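For the timestamp feature, the NVMe specification already defines a Timestamp feature (Set Features, Feature Identifier 0Eh) whose payload carries a millisecond count since the Unix epoch. The sketch below builds such a payload; the byte layout follows my reading of the spec (bytes 0-5 timestamp, byte 6 attributes, byte 7 reserved), and actually delivering it to the drive — for example, through an OS pass-through interface — is outside the scope of this sketch.

```python
# Minimal sketch: building the 8-byte payload for the NVMe Timestamp
# feature (Set Features, Feature Identifier 0x0E). Assumes the NVMe 1.3+
# layout: bytes 0-5 hold milliseconds since the Unix epoch (little-endian),
# byte 6 carries attribute flags, byte 7 is reserved.
import time

def build_timestamp_payload() -> bytes:
    ms_since_epoch = int(time.time() * 1000) & (2**48 - 1)   # 48-bit field
    payload = bytearray(8)
    payload[0:6] = ms_since_epoch.to_bytes(6, "little")      # Timestamp
    payload[6] = 0                                            # Attributes
    payload[7] = 0                                            # Reserved
    return bytes(payload)

print(build_timestamp_payload().hex())
```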
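For the HMB, a rough, purely illustrative calculation shows why buffer size matters. Assuming a common FTL design with 4 KiB mapping granularity and 4-byte entries (assumptions for the sketch, not a description of any particular Micron drive), the map for a 2 TB SSD is on the order of 2 GB, and the HMB size determines how much of it can live in fast host DRAM:

```python
# Back-of-the-envelope sketch (illustrative numbers, not a specific product):
# how much of a DRAM-less SSD's flash translation layer (FTL) map fits in a
# host memory buffer of a given size, and therefore how many logical-to-
# physical lookups can be served without an extra NAND read.
def ftl_map_bytes(capacity_bytes, page_bytes=4096, entry_bytes=4):
    return (capacity_bytes // page_bytes) * entry_bytes

TB = 1000**4
MiB = 1024**2

drive = 2 * TB
map_size = ftl_map_bytes(drive)                  # roughly 2 GB of mapping entries
for hmb in (64 * MiB, 200 * MiB):
    cached_fraction = min(1.0, hmb / map_size)
    print(f"HMB {hmb // MiB} MiB caches about {cached_fraction:.0%} of the map")
```

Actual hit rates also depend on access locality; the point is simply that a larger HMB lets the SSD resolve more mapping lookups without touching NAND.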
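For data hints, NVMe read and write commands already carry a Dataset Management byte (Command Dword 13, bits 7:0) in which the host can express expected access frequency and desired access latency. The sketch below encodes that byte; the bit layout follows my reading of the NVMe base specification, the specific values chosen are only examples, and it makes no claim about how the upcoming Windows release will populate these fields.

```python
# Sketch of the per-command "data hint" idea: encode the Dataset Management
# byte carried in NVMe read/write Command Dword 13, bits 7:0.
def dataset_management_byte(access_frequency: int,
                            access_latency: int,
                            sequential: bool = False,
                            incompressible: bool = False) -> int:
    assert 0 <= access_frequency <= 0xF and 0 <= access_latency <= 0x3
    return (access_frequency               # bits 3:0  expected access frequency
            | (access_latency << 4)        # bits 5:4  desired access latency
            | (int(sequential) << 6)       # bit 6     sequential request
            | (int(incompressible) << 7))  # bit 7     incompressible data

# Example: AI model weights are written once and read often, with low latency.
hint = dataset_management_byte(access_frequency=0x3,  # infrequent writes, frequent reads
                               access_latency=0x3,    # low-latency preference
                               sequential=True)
print(f"CDW13[7:0] = 0x{hint:02x}")
```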
These enhancements, which will be integrated into an upcoming Windows OS release, were announced by Microsoft’s Scott Lee at the FMS conference in August 2024. Working in collaboration with Microsoft, Micron is blazing the trail toward optimal SSD performance for the future.