πDataCS: A Large Model Data Computing System Empowering Faster, More Extensive AI Models

OpenPie’s 2023 Annual Technology Forum, entitled “Large Model Data Computing System”, concluded successfully in Shanghai. With well-known scholars, experts, and entrepreneurs from home and abroad gathered on the Bund, Ray Von, founder and CEO of OpenPie, delivered an in-depth exposition of avant-garde theories and technology breakthroughs in data computation for the era of large models. At the event, OpenPie released its new product line, PieDataComputingSystem (πDataCS), which aims to optimize large model computation processes.

Ray Von, CEO & Chairman of the Board, OpenPie

During the forum, Ray Von expounded his innovative theory of data computing systems to the guests. “In a large-model data computing system, everything in the world, along with its dynamics, can be digitized, forming data. This data can be leveraged as input to train an initial set of models,” he elucidated. “These trained models form new sets of computation rules, which are then incorporated back into the data computing system. This process keeps iterating, enabling infinite exploration of AI’s potential.” He emphasized, “This new surge of large models has spawned new species of data computation under a cloud-native infrastructure. Companies and organizations such as OpenAI, OpenPie, Databricks, and Snowflake will continuously empower the creation and training of large models to establish computation rules. According to PwC’s recent analysis, AI is projected to contribute approximately 26.2% to China’s GDP by 2030. These large models will drive an increasing magnitude of GDP-based economic activities.”

Furthermore, Ray Von presented OpenPie’s industrial theories, product matrix, and strategic roadmap for the large model data computing system. He revealed, “As the first computation engine of this system, PieCloudDB, with its progressively maturing virtual warehousing products and services, empowers enterprises with cloud-based data warehousing solutions for digital transformation. By optimizing the allocation of cloud resources, enterprises are encouraged to leverage their own digital assets and unlock limitless data computation capabilities, thereby fortifying their competitive positions. The second engine, PieCloudVector, is tailored to the storage and querying of vast amounts of vector data, supporting the use of large multimodal AI models. The third computation engine, PieCloudML, is purpose-built for machine learning workloads and large models. It integrates seamlessly with mainstream machine learning ecosystems and serves as a data computation engine for large models by bringing together all of an enterprise’s multimodal data resources. In addition, OpenPie’s latest smart caching hardware technology, πFPGA, has not only achieved key advancements in its hardware library but also accelerated AI large model computation. This is accomplished by implementing best practices in data storage, virtual data warehousing, and specialized domains such as neural networks. OpenPie’s self-developed storage infrastructure, JANM, serves as the foundation for high-speed computation engines in multi-cloud environments. JANM facilitates data interoperability, enabling multiple engines to process the same set of data simultaneously, which robustly supports and safeguards cloud storage in large model data computing systems.” Currently, OpenPie’s large model data computing system is offered in multiple versions in the domestic market, including a public cloud edition, a community edition, an enterprise edition, and an all-in-one machine.
These versions are designed to meet the diverse demands of businesses and cater to various business scenarios. They have been instrumental in constructing AI data infrastructures for customers across several industries, such as finance, manufacturing, healthcare, and education.
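To illustrate the kind of workload a vector engine such as PieCloudVector serves, the sketch below performs a nearest-neighbor search over stored embeddings with plain NumPy. This is a generic, hypothetical illustration of the concept only; the array names, dimensions, and query style are assumptions for the example and do not reflect OpenPie’s actual API.

```python
import numpy as np

# Hypothetical in-memory "vector store": 1,000 stored 128-dimensional
# embeddings plus one query embedding. A real vector engine would manage
# these at far larger scale with indexes and persistence.
rng = np.random.default_rng(42)
store = rng.normal(size=(1000, 128))
query = rng.normal(size=128)

# Cosine similarity between the query and every stored vector.
norms = np.linalg.norm(store, axis=1) * np.linalg.norm(query)
scores = store @ query / norms

# Indices of the 5 most similar stored vectors, best match first.
top_k = np.argsort(scores)[::-1][:5]
print(top_k)
```

In practice such a scan is the brute-force baseline; production vector databases answer the same top-k query with approximate indexes to keep latency low as the collection grows.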

Concluding his speech, Ray Von elaborated on the strategic vision for the product ecosystem of the large model data computing system. OpenPie will pursue a top-down approach with a three-layered structure comprising scenario AI, foundational models, and data computing systems. By fostering collaboration and mutual benefit, OpenPie seeks to enhance industry-specific AI applications through the integration of large model technology, ultimately generating substantial value for enterprises and society.

Andy Motten, founder, director, and general manager of Leaf Tree Labs, attended the forum and delivered a comprehensive presentation on the company’s next-generation smart hardware technology. He gave a detailed introduction to its design philosophy, technology innovations, and practical applications, showcasing how Leaf Tree Labs is pushing the boundaries in this field. Andy highlighted that Leaf Tree Labs is an IP design house for System-on-Chip architectures with over 20 years of experience in designing real-time logic systems.

During his presentation, Andy emphasized the need for more efficient data computing to reduce costs and power consumption. Leaf Tree Labs achieves this by using specialized data architectures tailored to the algorithms they run. With demand for data computing rising, conventional processors, both single- and multi-core, are struggling to keep up. This is particularly significant for AI and deep learning models, which have grown exponentially in size. To address these challenges, Leaf Tree Labs has initiated a partnership with OpenPie, recognizing the potential synergies between the two companies. They plan to collaborate across various application domains, from the storage layer to the computation layer. OpenPie will be the first user of Leaf Tree Labs’ storage IP, including its compression engine, while the two jointly explore opportunities to enhance database operations through new hardware architectures.