
[OID Korea 2018] OpenInfra Days Korea 2018 Press Conference

Published: Tuesday, July 24, 2018, 9:42 am
ACROFAN=Yong-Man Kwon | yongman.kwon@acrofan.com
OpenInfra Days Korea 2018, co-hosted by the OpenStack Foundation, the OpenStack Korea User Group, and Cloud Native Computing & K8S Korea, will be held on June 28-29 at COEX in Gangnam-gu, Seoul. The event grew out of 'OpenStack Day Korea' as part of an effort to promote a wider range of open infrastructure technologies and to revitalize the related ecosystem.

The event will be held under the theme of "Open Infrastructure: OpenStack, Containers, and Cloud Native Computing", and it will cover the ecosystem of open infrastructure technology created by combining OpenStack with cloud native computing technologies such as Kubernetes and containers. The OpenStack Foundation, along with other global open source foundations such as the Cloud Native Computing Foundation (CNCF) and the Open Networking Foundation (ONF), will participate in the event, and technical presentations, exhibition booths, and workshops will be held under the sponsorship of Samsung Electronics, SK Telecom, NetApp, manTech, and Open Source Consulting.

The press conference for OpenInfra Days Korea 2018 was attended by Jonathan Bryce, Executive Director of the OpenStack Foundation; Mark Collier, COO; Lauren Sell, Vice President of Marketing & Community Services; and Ian Y. Choi, President of the OpenStack Korea User Group. The OpenStack Foundation pointed to AI-related infrastructure as an area where the use of OpenStack-based infrastructure is beginning in earnest. It also emphasized that its ultimate goal is to solve many problems in the open source world by having users and technology companies work together.

 
▲ The press conference was attended by key executives of OpenStack Foundation and Korea User Group representatives

Q ) vGPU support was recently introduced in 'Nova', and until now there have been various restrictions on configuring vGPU environments. Does vGPU support in Nova have similar restrictions? (Acrofan)

vGPU is a fairly new concept. Within a GPU there are hundreds or thousands of small processors, components, and logic gates, and until now we have had to treat all of that as a single piece of hardware. NVIDIA and Intel are making it possible to virtualize these components within the GPU. Support from the chip manufacturer is needed to enable the use of vGPUs, and the restrictions will be similar to those in other environments.

On the other hand, only a limited number of chips support virtualization. The 'Cyborg' project in OpenStack uses a GPU or FPGA as an accelerator without virtualizing the chip, creating an accelerator inventory in the cloud and providing it to users or to several tenants. If vGPUs and non-virtualized accelerators are used in the cloud at the same time, it is possible to build an infrastructure with powerful processing performance.
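
For reference, a Nova vGPU is requested through a flavor that the operator has tagged with the resources:VGPU=1 extra spec, so booting onto a vGPU-capable host looks the same as any other server create call. The minimal Python sketch below uses the openstacksdk client; the cloud, flavor, image, and network names are hypothetical placeholders for whatever an operator has prepared.

    import openstack

    # Connect using an entry from clouds.yaml; the cloud name is an assumption.
    conn = openstack.connect(cloud='mycloud')

    # A flavor the operator has created with the extra spec resources:VGPU=1,
    # which tells the Nova scheduler to place the instance on a host that
    # exposes a virtual GPU. The flavor, image, and network names are examples.
    flavor = conn.compute.find_flavor('m1.vgpu')
    image = conn.compute.find_image('ubuntu-18.04')
    network = conn.network.find_network('private')

    # Boot the instance; inside the guest, the vGPU appears as a PCI device
    # handled by the vendor's driver.
    server = conn.compute.create_server(
        name='gpu-worker',
        flavor_id=flavor.id,
        image_id=image.id,
        networks=[{'uuid': network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)

Which vGPU types can be exposed in the first place still depends on the hypervisor and on the vendor driver, which is where the restrictions mentioned above come from.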

Q ) In the case of Kata Containers, virtualization is used for container isolation. In general, containers have the advantage of high efficiency because they share the host kernel, but it seems that using virtualization might reduce that efficiency. What is the reason for combining virtualization with container technology, and what additional security benefits can be gained? (Acrofan)

Container technology uses cgroups and namespaces to isolate kernel resources. Therefore, no matter what tasks are performed in a container, the same kernel must be used, and the CPU, storage, and network all run in the same place. In this case, sharing with other containers and workloads is inevitable; the overhead is low, but the isolation is not sufficient. To secure such a system, it therefore had to be configured in many layers. Large public cloud services have run each user's containers inside that user's own VMs to get sufficient isolation, which is a less efficient method.

Kata Containers uses a much smaller VM. The size of the VM is measured in megabytes rather than gigabytes, and it can start in about 100 ms rather than minutes. It has a small kernel, a small overall footprint, excellent isolation and security, and less overhead than a complete virtual machine. Another advantage is compatibility with existing container tools: it runs containers at the lowest level of the container stack and can be applied as a plugin to tools such as Docker or Kubernetes, so existing container tools and applications can be used as they are. Kata Containers simply adds an option to run the code at that level.
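
To illustrate the compatibility point, switching a workload from the default runc runtime to Kata only changes which runtime Docker invokes; the container image and tooling stay the same. The sketch below uses the Docker SDK for Python and assumes a host where the Kata runtime has already been registered with Docker under the name 'kata-runtime'; with runc the container reports the host kernel, while with Kata it reports the guest kernel of its lightweight VM.

    import platform

    import docker

    client = docker.from_env()

    # With the default runtime (runc), the container shares the host kernel,
    # so the kernel release it reports matches the host's.
    shared = client.containers.run('alpine:3.8', 'uname -r', remove=True)

    # With the Kata runtime (registered in /etc/docker/daemon.json under the
    # name used here, which is an assumption about the local setup), the same
    # image runs inside a lightweight VM with its own guest kernel.
    isolated = client.containers.run('alpine:3.8', 'uname -r',
                                     runtime='kata-runtime', remove=True)

    print('host kernel    :', platform.release())
    print('runc container :', shared.decode().strip())
    print('kata container :', isolated.decode().strip())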

Q ) OpenStack has been expanding from the cloud to telecommunications and IoT infrastructure. What areas are expected to expand in the future, and which projects are closely related to this? (Acrofan)

The 'Airship' project can be applied to various use cases, but in particular it will be a big help in the edge area. It automates the software stack that runs on the edge and makes it reusable. StarlingX is a more edge-specific project. Both projects are in their early stages, but they are likely to become more interesting next year as they mature and support increases.

Beyond that, there are examples of using OpenStack environments for machine learning and AI. Hardware support, such as for GPUs and FPGAs, is intended to serve this use case. As workloads from data analysis and similar fields increase, new technologies and related projects will be developed. On the security side, the technology stack, including virtualization, still has a lot of room to improve in security and performance. Kata Containers is a technology that strengthens security while keeping containers lightweight. Ultimately, the goal is for users and technology companies to work together to solve many problems in the open source world.

 
▲ The diversification of the cloud has also been an important reason for the change in the OpenStack Foundation's strategy.

Q ) What is the exact meaning of "cloud diversification"?

Currently, users want to use a variety of technologies in the cloud. Using a variety of architectures is more beneficial for handling AI, machine learning, and container workloads. Different types of chips and storage hardware, as well as a variety of other hardware and software, will be available. From this perspective, cloud and data center technologies are diversifying. With edge computing, the concept of the cloud itself is also changing: there is demand to use the cloud at the edge, and that too is a form of cloud diversification.

Q ) 'Open infrastructure' could be applied to containers and CI/CD in addition to the cloud. What meaning does 'Zuul' have in an 'open infrastructure'?

Infrastructure is being automated and the applications running in the cloud are changing, and the driving force behind that automation and change is that people want to improve, test, and integrate software and deploy it to the cloud continuously. 'Zuul' lets users develop software that runs on open infrastructure using open tools. It is a cloud service that allows you to submit code from any cloud-hosted server, from whichever community, city, or country you are in, and to automate testing in the cloud. I think this fits well with the kind of infrastructure we wanted.

On CI/CD, it was already possible to compile and test code immediately after it was committed when development used automated builds through tools such as Jenkins. However, in some cases, such as code reviews, automated builds do not work well. The foundation's Infrastructure Team created 'Zuul' to solve these problems, and the foundation's documentation site is managed with it. I believe that open infrastructure can include not only technical areas but also the infrastructure actually used for development.

'Zuul' was originally developed by the community for the OpenStack project, but it has also attracted a lot of attention from outside projects and software teams. Because there was strong demand from outside, it naturally became an independent project. With the release of version 3.0 in March, it was a good time to split it off as an independent project. Version 3.0 also supports technologies that are not related to OpenStack but are considered important, such as GitHub.
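
To give a concrete sense of the workflow, Zuul watches a code review system (Gerrit, or GitHub since version 3.0), runs the configured jobs for each proposed change, and reports the results back; deployments that enable the web dashboard also expose those results over a small REST API. The Python sketch below queries such an endpoint; the URL, tenant name, project name, and response fields are assumptions about a particular deployment rather than guaranteed parts of every Zuul installation.

    import requests

    # Base URL and tenant of a Zuul web dashboard; both are assumptions about
    # a particular deployment, not fixed parts of every Zuul installation.
    ZUUL_API = 'https://zuul.example.org/api/tenant/example'

    # Ask for the most recent completed builds of one project.
    resp = requests.get(ZUUL_API + '/builds',
                        params={'project': 'example/project', 'limit': 10},
                        timeout=30)
    resp.raise_for_status()

    # Print job name, result, and pipeline for each build; these field names
    # follow the public dashboard API and may differ between versions.
    for build in resp.json():
        print(build.get('job_name'), build.get('result'), build.get('pipeline'))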

Q ) What do you think about the contributions of Korean communities and companies?

The largest part of the contribution from Korea is the integration of containers and OpenStack. Helm and Kolla are tools designed to make it easier to operate OpenStack with containers. In this area, Korean companies such as SK Telecom are playing a leading role. By contributing to the integration of containers and OpenStack, the stability of OpenStack improves and upgrades become easier.

In the case of Samsung Electronics, it has an NFV-related solution. It has put a great deal of development effort and resources into the management layer that sits on top of a Kolla-based containerized deployment, and has turned that part into a solution. SK Telecom is participating in a project that combines OpenStack with container technology so that OpenStack itself can be built and operated easily on a container platform. In particular, SK Telecom does not simply develop internally; it also contributes that work to the community and co-develops it there with AT&T, Intel, and others.

Korea plays an important role in the movement toward open infrastructure, and there are more booths and sponsors at this event than in the past. OpenStack, Kubernetes, and container technologies are being combined to provide people with the 'open infrastructure' solutions they really want, and I think this shows how much interest Korea has in this movement. We participate in various events around the world and meet many community members, and we have found that each country has its own characteristics. In Korea, local leaders are proactive and collaborative, and the adoption of new technology is fast and user-driven.

On the other hand, in the past there were not many cases of domestic companies doing business extensively based on OpenStack. This year, however, domestic companies are reaching a position where they can do business with, and sponsor, open infrastructure technology. As the scope expands to container technology, the opportunities also seem to expand, and that trend can be seen in this year's sponsors.

Copyright © ACROFAN. All Rights Reserved.