Solving the “Natural Divide” Issue in AI Implementation
- 14 Feb 2020
Artificial intelligence, or AI, is often misunderstood, and the term is loosely used in everyday language. In truth, only a handful of true AI companies in the world have developed frameworks for building deep learning tools and products. At present, a few mainstream open-source deep learning frameworks dominate model training worldwide, such as Google’s TensorFlow, Eclipse’s Deeplearning4j (Skymind), and Facebook’s PyTorch.
From basic algorithms to implementation on hardware chips, the multitude of AI implementation systems creates bottlenecks such as poor platform compatibility, low operating efficiency in large-scale applications, and slow time to market for finished products. As a result, a natural divide has formed between AI R&D and large-scale industrial applications.
What is the natural divide in AI?
Of the global artificial intelligence companies, barely 5% can develop autonomous AI technology; the remaining 95% of the industry builds end-to-end on open-source AI frameworks. However, production and deployment on the application side of these frameworks can present compatibility issues. Add to this the basic differences in implementing solutions on hardware, and a natural divide emerges between artificial intelligence research and industrial applications.
To provide some context, a typical machine learning model needs to go through processes such as data extraction, data preprocessing, feature selection, model training, model testing and result display, model deployment, and more. Each point in the process is composed of one or more smaller steps. Model development, for example, is an iterative process that requires multiple cycles to finally reach the intended result. The difficulty lies in integrating these models into production environments, as the team that builds a model is often not the team that uses it in the real world.
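The workflow described above can be sketched in miniature. This is a hedged illustration only: the function names, the toy data, and the trivial least-squares “model” are all stand-ins invented for this sketch, not the API of any framework mentioned in the article.

```python
# Illustrative sketch of the workflow: extract -> preprocess -> train -> test.
# All names and data here are hypothetical stand-ins for real pipeline stages.

def extract_data():
    # In practice this stage would query a database or data lake.
    return [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]

def preprocess(rows):
    # In practice: drop incomplete records, normalize units, etc.
    return [(x, y) for x, y in rows if y is not None]

def train(rows):
    # A trivial least-squares slope y ~ w * x stands in for model training.
    num = sum(x * y for x, y in rows)
    den = sum(x * x for x, y in rows)
    return num / den

def evaluate(w, rows):
    # Mean squared error of the fitted slope stands in for model testing.
    return sum((y - w * x) ** 2 for x, y in rows) / len(rows)

data = preprocess(extract_data())
model = train(data)
error = evaluate(model, data)
```

In a real project, the last three lines would sit inside the iterative loop the article describes: inspect `error`, adjust features or parameters, and retrain until the result is acceptable.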
In the current standard process, developers spend countless hours solving multi-platform compatibility issues. For example, the algorithm for recognizing faces is different from the algorithm for recognizing items, so to recognize both in a single frame you have to run two separate algorithms over the same input, which makes the process inefficient.
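The double-inference problem above can be made concrete with a small sketch. The recognizers and the frame format here are hypothetical stand-ins, not any real library’s API; the point is only that each model must traverse the same frame independently.

```python
# Hypothetical sketch of the "run the algorithm twice" inefficiency:
# two separate recognizers each make their own full pass over one frame.

inference_passes = 0  # counts how many times the frame is traversed

def detect_faces(frame):
    # Stand-in for a face-recognition model's inference pass.
    global inference_passes
    inference_passes += 1
    return [obj for obj in frame if obj["kind"] == "face"]

def detect_items(frame):
    # Stand-in for an item-recognition model's inference pass.
    global inference_passes
    inference_passes += 1
    return [obj for obj in frame if obj["kind"] == "item"]

# One frame containing both a face and an item.
frame = [{"kind": "face", "id": 1}, {"kind": "item", "id": 2}]

# "Simultaneous" recognition today: two algorithms, two full passes.
faces = detect_faces(frame)
items = detect_items(frame)
```

Even in this toy version, the pass counter ends at two for a single frame; at video frame rates and industrial scale, that duplicated work is the inefficiency the article is pointing at.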
An added layer of complexity deepens and entrenches the natural divide: secondary development between training and application in different languages, different code forms and paths, and differences in the underlying computing libraries between frameworks all make optimization difficult and often leave deployments unable to synchronize with new framework versions.
These bottlenecks lead to difficulties in the development of artificial intelligence in industry applications.
A solution to the natural divide
In China, which has a more developed AI ecosystem, we have been exploring solutions to this issue. Together with the Zhangjiang Laboratory of Brain and Intelligent Technology Institute and the Shanghai Brain Sciences and Brain Research Center Joint Laboratory, we have developed a potential solution: the world’s first platform to manage the deployment of industrial-scale applications of artificial intelligence.
This platform tackles the compatibility issue as a whole. It removes the technical obstacles of artificial intelligence, from scientific research to production deployment, and is compatible with the current mainstream global deep learning frameworks. We also worked to ensure that the platform is compatible with mainstream chip vendors and big data system platforms, while supporting cloud or local deployment and providing developers with basic service modules. This means that at every step in the process, from data preprocessing to the final model service, developers can write their own machine learning logic without running into compatibility issues, thanks to a simple application programming interface.
During deployment, each pipeline step performs one segment of the required tasks in the machine learning model production cycle. Combining multiple pipeline steps forms a complete model production pipeline.
This step-based, streamlined packaging makes model development and deployment more efficient and easier to use. It enables developers to create “production lines” for machine learning development and to quickly deploy artificial intelligence models, which was previously difficult due to the natural constraints of AI development. This allows for rapid transformation of model results into applications and greatly increases the social and commercial viability of artificial intelligence.
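The step-and-pipeline idea above can be sketched as follows. The article does not show the platform’s actual API, so the class names and the two toy steps here are assumptions made purely for illustration.

```python
# Minimal sketch of the step-based pipeline concept: each step performs
# one segment of the production cycle, and chaining steps forms the
# complete "production line". Names are hypothetical, not the real API.

class PipelineStep:
    """One segment of the model production cycle (e.g. preprocessing)."""
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

    def run(self, data):
        return self.fn(data)

class Pipeline:
    """Chains steps so the output of each step feeds the next."""
    def __init__(self, steps):
        self.steps = steps

    def run(self, data):
        for step in self.steps:
            data = step.run(data)
        return data

# A two-step "production line": scale the inputs, then apply a toy model.
pipeline = Pipeline([
    PipelineStep("preprocess", lambda xs: [x / 10 for x in xs]),
    PipelineStep("predict", lambda xs: [round(2 * x, 2) for x in xs]),
])
result = pipeline.run([10, 20, 30])
```

The design point is that each step is swappable: replacing the `predict` step with a different model leaves the rest of the production line untouched, which is what makes this packaging easier to deploy.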
“Southeast Asia is still in its infancy when it comes to AI, but we see great potential and are in the process of building the infrastructure required for large scale industrial applications of AI solutions in the region.” – Shawn Tan
Implications for Southeast Asia
Southeast Asia is still in its infancy when it comes to AI, but we see great potential and are in the process of building the infrastructure required for large-scale industrial applications of AI solutions in the region. From our recent MoU with Huawei Cloud to develop a new cloud and artificial intelligence (AI) Innovation Hub to foster innovation and talent development in ASEAN, to our efforts to grow local talent, we are committed to enabling the ecosystem in Southeast Asia.
The new solution will help ease the teething issues for the region as its ecosystem matures and starts developing solutions for the market. It will also accelerate talent development and shorten the timeline for the region, since the obstacles faced by markets such as China and the US need no longer be a major issue in Southeast Asia.
Therefore, we see great potential for aggressive growth of the region’s AI ecosystem and look forward to being part of the evolving technology landscape.
Shawn Tan, CEO of Skymind
Shawn Tan is the Founder & CEO of Skymind Global Ventures. He started the company with co-founder Dr. Goh Shu Wei to provide supported access to market for open-source AI platforms and to invest in building the AI ecosystem. Starting in China, Shawn has built Skymind into one of the largest AI brands in the region and has since expanded Skymind’s presence across Europe and Asia.
Shawn’s entrepreneurial journey started in 2004 when, at just 19 years old, he started a business trading chemical products from Europe to Malaysia. Within five years, his portfolio had grown to include chemical packaging and distribution, and investment in real estate. Together with Dr. Goh, Shawn then co-founded Universal Pave in 2012, which secured an exclusive global commercialisation partnership with the commercial company of the Research Institute of Highway, Ministry of Transport, China.
To diversify their business, Shawn and Dr. Goh invested in a private investment company, Jetset International Limited, incorporated in Hong Kong in 2015. Jetset’s investment portfolio spans real estate developments, technology companies and civil engineering projects globally.
Source: Tech Collective SEA