Intel Portfolio: How Is Intel Powering the Smart and Connected Digital World


Intel is playing a significant role in bringing digital transformation across organizations. In this article, Sreejith Gopala Krishnan from Intel Corporation is in conversation with Asheet Makhija (Business Head – Techjockey Infotech) and takes us through the vast portfolio of Intel solutions.

Asheet- Hello everyone, my name is Asheet Makhija. I represent SISL and Techjockey, and we are here to discuss some important aspects of Intel, Intel’s portfolio, and Intel’s advancements in technology and the processing power it has been delivering over the years.

I’m sure we are all familiar with Intel and its portfolio, which has evolved over the years, but I think it’s important for us to get the latest context on where Intel as an organisation, and Intel’s technology, stand today.

We are happy to introduce Sreejith Gopala Krishnan, Partner Solution Architect for National and Global Systems Integrators at Intel Corporation. He has been our go-to person when it comes to solutioning, so I can say with confidence that he is the best person to discuss the topics at hand.

To start with, let’s discuss the Datacenter Portfolio part with you. And the first thing which comes to my mind is about digital transformation. So we all know that digital transformation and everything around digital has really gained pace over the last few years and a lot of organisations are either on the path or have already matured to a fairly large extent.


What has been Intel’s play in this aspect of digital transformation for organizations over the last few years, and as of today?

Sreejith- Sure, first of all, thank you Asheet, and thank you SISL team for setting up this discussion. As you rightly pointed out, digital transformation is finally entering the mainstream and Intel has been quite aligned towards building the right strategy and enabling the entire foundation.

It has been building technologies that lay the foundation of performance and agility needed to meet the common objectives we are trying to solve for customers.

Over the last couple of decades, Intel has employed a holistic strategy, and I want to bring to everybody’s notice that it keeps building leadership products so that there is continuous innovation in the hardware and infrastructure space.

Once we have the leadership products in place, it is important to build an ecosystem around them and optimize the solutions. Further, you need to work with ecosystem partners to enable it all and take it to end customers.

That is why Intel works with a wide range of partners to optimize and improve the performance of workloads so that they run best on Intel hardware. That is also where its approach of optimizing some of the open-source libraries comes in.

Intel has been a strong contributor to the open-source community. Take some of the advanced workloads, like AI or Advanced Analytics, that enable and accelerate digital transformation.

It has been optimizing some of these AI frameworks, be it TensorFlow or PyTorch, which are among the most used frameworks in the AI space.

Intel is working with the open-source community to optimize these frameworks and get the best out of Intel hardware when you are running or building applications on these AI platforms. And not just that – Intel is also working with the broader ecosystem.

Intel is also strategically making acquisitions and investments in the AI space. One of these is Mobileye – an autonomous driving company that develops vision-based advanced driver assistance systems. Intel also has Movidius, which comes into the edge-to-cloud story.

We are also coming up with these Movidius designs, which are low-power processor chips for the computer vision and IoT kinds of use cases we are trying to solve at this moment.

Through all these strategic investments, Intel is playing an instrumental role and is driving this entire digital transformation through its leadership products. It is also bringing the ecosystem together to accelerate this entire transformation.

Asheet- Perfect, you spoke about the technological advancements which Intel is doing around the infrastructure side and then you also spoke about the ecosystems.

I can confirm that you are absolutely right, because we get a lot of support from Intel. You also spoke about the contributions Intel makes to open source and community-based development, and the strategic acquisitions Intel has been making.

I think that’s a pretty good 360-degree view of the entire digital transformation happening in the industry today. It also means that Intel has gradually, consistently, and globally evolved from being a processor-centric organization to a data-centric company.

Intel’s Role in Transforming Data Center Technology


So, it would be good if you could just talk us through what’s really happening on the data center side with respect to everything that Intel has been doing.

Sreejith- Sure Asheet, thanks for bringing up this point, because we have been vocal about our transformation from a processor-centric company into a data-centric company.

Intel’s mission has always been to architect the future of the data center. We build products to cater to end-to-end solutions, scaling from HPC to the 5G trend we are seeing, and of course the cloud and the emerging fields of AI and Advanced Analytics.

Intel has been the leader in processors – in the computing industry, manufacturing processors for almost five decades now. What Intel has done is invest in a new approach to building infrastructure so that we can accelerate this entire data-centric transformation.

When it comes to IT transformation, it’s not right to point to just one aspect, the processor. It is important to emphasize the need for an end-to-end optimized platform and stack to drive modern data centers. When Intel builds these infrastructure products for the data center, it builds products for three different goals:

  1. To move data faster
  2. To store more and more data
  3. To process this data as fast as possible.

In the last few years, we have seen an explosion in the amount of data that is being generated and which amounts to kind of an explosion in data traffic as well. The network traffic is quite high in the data center, be it east-west or north-south. So, in that aspect as traffic grows, connectivity can become a bottleneck.

To fully utilize and unleash high-performance compute, Intel is bringing a lot of products into the network portfolio. It has also acquired a company called ‘Barefoot’ – a leader in programmable switches – and is leveraging DPDK (the Data Plane Development Kit) to accelerate software-defined networking and move data as fast as possible.

Now that we have taken care of the network aspect, the next step is to store as much data as possible, and store it on a very fast medium that accelerates data access and reduces latency. Intel is innovating here with its new memory technology, ‘Optane’, and the transistor-less technology Intel offers in the memory segment.

Intel has been innovating here over the years – it is one of the founding members behind the NVMe and USB standards as well. Likewise, the new Optane technology in the memory and storage space is solving a lot of issues we see in day-to-day data center operations.

Lastly, we need to process that data, and for this Intel is building a broad portfolio of products from the CPU to the XPU (which refers to processing anything and everything). You will definitely agree that at this point we are building stacks and software that may be processed on a CPU, a GPU, an FPGA, or other custom silicon. It is important to note that these different kinds of hardware make developers’ lives very difficult when they are building products to cater to each level of hardware.

So, Intel is coming up with a new strategy called oneAPI, where you can seamlessly move your application across different hardware platforms to leverage the best of the architectures in place.
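
The idea Sreejith describes – write the application once and let a runtime dispatch it to whichever device is available – can be sketched in plain Python. This is a toy illustration only: real oneAPI code is written in Data Parallel C++ (SYCL), and the `Device` class, `select_device` helper, and device names below are invented for the sketch.

```python
# Toy sketch of the "write once, run on any device" idea behind oneAPI.
# The real programming model is Data Parallel C++ (SYCL); the backends
# below are stand-ins, not actual oneAPI device names.

def vector_add(a, b):
    """Device-independent kernel: element-wise addition."""
    return [x + y for x, y in zip(a, b)]

class Device:
    def __init__(self, name):
        self.name = name

    def submit(self, kernel, *args):
        # A real runtime would compile the kernel for this device
        # (CPU, GPU, FPGA); here we just run it on the host.
        return kernel(*args)

def select_device(preferred=("gpu", "fpga", "cpu"), available=("cpu",)):
    """Pick the first preferred backend that is present."""
    for name in preferred:
        if name in available:
            return Device(name)
    raise RuntimeError("no device available")

dev = select_device(available=("cpu", "gpu"))
result = dev.submit(vector_add, [1, 2, 3], [10, 20, 30])  # -> [11, 22, 33]
```

The point of the pattern is that `vector_add` never mentions a device: the same kernel body runs wherever the selector lands.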

In this way, Intel is evolving from a processor-centric to a data-centric company by bringing its latest and greatest innovations into its products, and with the oneAPI strategy it is enabling you to build the best applications for your data-centric workloads.

Asheet- Perfect. So, we can sum up that the first and most important thing is moving the data, second is the storage required for it, and third of course is the processing.

That gets me to the next question, around the data that is being created or replicated.

Last year, in 2020, there was in excess of 64 zettabytes of data created or replicated in some form. That puts an enormous load on infrastructure, and the most important and most expensive component of any infrastructure is the DRAM.

How Is Intel Addressing the Cost Challenges?


How is Intel optimizing and addressing the cost challenge in the overall portfolio?

Sreejith- DRAM is indeed one of the most expensive components. When we are trying to consume or process the amounts of data being generated, memory plays a very vital role, and it is very important that we address this issue. For that, Intel has come up with a new technology – “Intel Optane”. This new memory technology is the answer to the problem you just stated: we need higher memory capacity, but without a correspondingly high TCO.

Optane offers products like Optane memory, Optane persistent memory, and Optane solid-state drives as well.

I am proud to say that Intel has been innovating in the memory and storage space. With Intel Optane in place, you can increase the overall utilization and optimization of your data center. When CPU utilization is low, or when an application’s read/write ratios demand more memory to reach optimum performance for memory-intensive workloads, Optane plays a very significant role.

Optane products come in two different segments.

The first is a DRAM-like product called Optane persistent memory. With that in place, we are not trying to push DRAM out of the data center picture: when you use Optane in conjunction with DRAM, you increase the memory density of your overall data center so that you can utilize the CPUs better. One of the main things we have seen so far is that the processor is quite fast, and you don’t always get the best out of your CPU because it is mostly sitting idle.

So, it is very important to understand that we need the right capacity of memory closer to the processor to utilize it better. Optane persistent memory plays a vital role here: you can stack more memory closer to your processor without compromising on TCO. When it comes to improving performance, you don’t want to bear unlimited costs, so we have kept a check on that.
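
The DRAM-plus-Optane arrangement described here behaves like a two-tier memory: a small, fast tier in front of a larger, persistent capacity tier. Below is a minimal sketch of that idea; the `TieredMemory` class, tier sizes, and eviction policy are invented for illustration – real tiering is handled by the hardware and operating system, not application code.

```python
# Toy model of a two-tier memory setup: a small, fast "DRAM" tier backed
# by a larger "persistent memory" tier. Hot data is promoted to DRAM on
# access; cold data is evicted back down, least-recently-used first.
from collections import OrderedDict

class TieredMemory:
    def __init__(self, dram_slots=2):
        self.dram = OrderedDict()     # hot tier, limited capacity
        self.pmem = {}                # capacity tier, persistent
        self.dram_slots = dram_slots

    def write(self, key, value):
        self.pmem[key] = value        # data always lands in pmem

    def read(self, key):
        if key in self.dram:          # hot hit: served from DRAM
            self.dram.move_to_end(key)
            return self.dram[key], "dram"
        value = self.pmem[key]        # miss: fetch from pmem and promote
        self.dram[key] = value
        if len(self.dram) > self.dram_slots:
            self.dram.popitem(last=False)  # evict least-recently-used
        return value, "pmem"

mem = TieredMemory()
mem.write("a", 1)
first = mem.read("a")    # served from the capacity tier
second = mem.read("a")   # now cached in the DRAM tier
```

The design choice mirrors the interview's point: DRAM is not removed from the picture, it is simply kept small and hot while the capacity tier absorbs the bulk of the data.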

Now, how is Optane different from the other technologies you see in the market?

It is built on 3D XPoint storage media, which is different from the transistor-based memory design used in the storage and SSD market at the moment.

One of the use cases we can relate to when it comes to Optane technology is virtualization, because that is where you can talk about density and improving the utilization of your overall data center.

And when it comes to a data center, we normally look at cost from the server perspective, but that is not the only cost we need to look into. When procuring servers, you also have to take care of licensing costs, operational costs, data center cooling, and other aspects as well.

So, it’s important to take the end-to-end data center into consideration. That is where moving from a scale-out architecture to a scale-up architecture becomes very important, and in that aspect Optane memory plays a major role.

Also, for in-memory databases like SAP HANA, where you need to load the entire dataset from the capacity tier or SSD drives into memory for processing, Optane memory offers you that extra amount of memory with a lower TCO and higher data persistence.

DRAM is volatile, and persistent memory is nonvolatile in nature. So that extra bit of advantage you will get with Optane memory.

Asheet- Got it, thanks Sreejith. Can you give us a roadmap to help us understand Intel’s vision for this entire data centricity, and how it is going to move forward? It will keep evolving, and Intel is really shaping one of the defining technologies in the industry.

Intel Roadmap: The Way Ahead


And so, you are at the forefront of how things are going to move, so just a quick preview of the vision and whatever you can share from a roadmap perspective?

Sreejith- One of the great things happening at Intel is that it is continuously innovating. Having said that, we have a very strong roadmap for our data-centric portfolio.

I want to touch on some of the leading products that we are coming up with:

  1. We have 3rd gen processors already on the market and we also have our 4th gen processors coming next year as well.
  2. In the Optane space, we have had the 100 and 200 series so far, and we are also coming up with our next-generation Optane memory.
  3. We also have products lined up in the GPU space, which will be a very substantial product enhancement that we bring into the vector workload segment.
  4. We are also coming up with a product called the IPU (Infrastructure Processing Unit) that will give you an accelerator for advanced workload needs such as 5G and other accelerated applications.

So, we have an end-to-end portfolio, from the Edge to the Cloud, catering to multiple workload needs, be it in the processor segment, the GPUs, or the memory. In that way, we are strongly aligned with a strong roadmap in the years to come from Intel’s perspective.

Asheet- Thank you very much Sreejith, it was a great conversation around the data center portfolio. I really appreciate your time for this conversation, and it was quite enlightening.

SECOND SERIES

Now, moving on to the 3rd Gen Xeon – ICX launch. Please tell us which is the latest Xeon processor launched for data center workloads and the server segment, and touch upon its key features.

Sreejith- Sure, thank you Asheet once again for having me here. To touch upon what Xeons are – as most of you know, Xeon Scalable processors are designed for data center workloads.

We launched our first Xeon Scalable processor in 2017, and we are proud to say that we have shipped over 50 million Xeon Scalable processors to customers around the world. Having said that, we recently launched our latest 3rd Gen Xeon processor, named Ice Lake, mainly for the mainstream 1- and 2-socket server segments.

So, Ice Lake is taking over from our previous 2nd Gen processor, Cascade Lake. To say a bit more about what Ice Lake has to offer: it is the first Xeon processor built on our 10nm process technology.

We have brought in a lot of advancements, like the scalable and balanced architecture we are building into the Xeon processor family. On top of that, specific to the Ice Lake Xeon Scalable processors, we have increased the core count – we now support up to 40 cores per processor in this release. We have also brought in a lot of improvements to instructions per cycle.

We have achieved about a 20% instructions-per-cycle improvement in this release, which translates directly into better performance for workloads across the wide variety of segments we are trying to serve at this moment.

Also, some of the other hardware improvements that we have tried to bring in through this release are increasing the memory channels. So, as you all understand, the need for memory is continuously growing with the IT transformations that we are witnessing at this moment. So, we have improved the hardware as well to support more and more memory channels.

We have improved from 6 to 8 in this release, and we have also added support for the PCIe gen 4 as well through this release which adds 2x the bandwidth compared to the previous PCIe gen 3 versions which are out in the market.
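
The 2x figure follows directly from the published per-lane rates: PCIe Gen 3 signals at 8 GT/s and Gen 4 at 16 GT/s, both using 128b/130b encoding, so doubling the transfer rate doubles usable bandwidth. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the "2x bandwidth" claim, using the
# published per-lane transfer rates: PCIe Gen3 runs at 8 GT/s and Gen4
# at 16 GT/s, both with 128b/130b encoding.

def pcie_bandwidth_gbps(transfer_rate_gt, lanes=16, encoding=128 / 130):
    """Usable one-direction bandwidth in gigabytes per second."""
    return transfer_rate_gt * encoding * lanes / 8  # 8 bits per byte

gen3_x16 = pcie_bandwidth_gbps(8)    # ~15.75 GB/s for an x16 slot
gen4_x16 = pcie_bandwidth_gbps(16)   # ~31.51 GB/s for an x16 slot
ratio = gen4_x16 / gen3_x16          # exactly 2: only the signal rate doubles
```

Since the encoding overhead is unchanged between the two generations, the ratio comes out to exactly 2x, matching the claim in the interview.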

These are some of the hardware improvements we have brought in. Apart from that, there are some key differentiators compared to the competition in the market. Ice Lake brings built-in accelerators – it is the only processor in the market with built-in AI acceleration. Having said that, AI is one of the key workloads we are seeing in every segment to solve customers’ problems.

GPUs are significantly used for AI workloads, but Intel’s strategy is that you shouldn’t need a GPU for every AI workload. So, Intel is building an AI accelerator into the processor itself, so you don’t have to pay more for separate hardware to take care of vector workload processing.

This processor can take care of your general-purpose workloads as well as advanced workloads like AI, through the Deep Learning Boost and AVX-512 accelerators built into the Xeon processor.

We are trying to build and improve it continuously from gen to gen and these are some of the key differentiators on why you should choose Intel Ice Lake processors. Especially, when it comes to these niche technologies be it AI, Analytics, or Network workloads which are continuously growing in the Data center segment.

Asheet- These were the improvements that are coming up in the number of cores, the processor, the instructions per cycle, and the memory channels. Also, how it can be used for the general-purpose workload and vector workloads, especially the entire AI and Advanced Analytics workloads that are there.

If we were to draw parallels with the previous Xeon generation processor, how is the performance getting enhanced as compared to the previous gen processor?

Sreejith- Surely, it’s a fair question. When we are trying to come up with these new products, there is always this question of “What’s more that we get?” and “Why should one go for the latest processor?”.

If you look at our journey over the last five years in the processor segment, we have seen almost a 3.5x performance improvement over that span. If you compare the Broadwell processors we had then, through Skylake and Cascade Lake, to Ice Lake, there is a 3.5x improvement.

Not just that: compared to Cascade Lake alone, we are seeing almost a 1.46x gen-on-gen improvement in performance. We are achieving this through the 20% instruction-per-cycle improvement we brought in, architectural enhancements like the increase in memory channels from 6 to 8, support for the latest Optane persistent memory 200 series (Barlow Pass), and support for PCIe Gen 4, which offers 2x the bandwidth of the previous generation.
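
A quick sanity check on these figures: Broadwell → Skylake → Cascade Lake → Ice Lake is three generation steps, so a roughly 3.5x cumulative gain implies an average per-step factor of the cube root of 3.5 – in the same ballpark as the quoted 1.46x for the latest step. (The exact step count is an assumption on our part; the interview names four architectures but does not spell it out.)

```python
# Sanity arithmetic on the generation-on-generation figures quoted above.
# Broadwell -> Skylake -> Cascade Lake -> Ice Lake is taken to be three
# generation steps; a ~3.5x cumulative gain over that span implies an
# average per-step factor of 3.5 ** (1/3).

cumulative_gain = 3.5
generation_steps = 3
avg_per_gen = cumulative_gain ** (1 / generation_steps)  # ~1.52x per step

# The quoted Cascade Lake -> Ice Lake step alone is 1.46x, which is in
# the same ballpark as the long-run average.
ice_lake_step = 1.46
```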

So, you now have more PCIe lanes to support Gen 4 PCIe-based SSDs and the network and ethernet cards you want to plug into those lanes. In that way, we have made significant improvements over our previous generations. We have also brought in a number of accelerators (apart from the AI accelerators we spoke about) to take care of your security needs. Crypto accelerators are quite significant in today’s world, with all of us working from home amidst this pandemic.

We are all connected over VPNs, which adds a lot of load from processing security-related applications. That is where Ice Lake brings in crypto acceleration to improve performance. And not just for security – a broad range of workloads, including general-purpose ones, have seen almost a 70% gen-on-gen improvement compared to our previous generation processors.

Asheet- So it is very interesting that there has been a 3.5x improvement over the last five years and I think this trend has continued for many years for Intel. Thanks for the clarification on the instructions per cycle, memory channel boost, and the accelerator, especially around the AI and the security.

So, you mentioned the entire Edge-to-Cloud story and the movement from edge to cloud for various organizations. How does Ice Lake fit into this initiative, and how does a customer gain confidence that this is the way to go for an organization’s edge-to-cloud roadmap?

Sreejith- You pointed it out right; it is important to evolve the products with the Edge-to-Cloud narrative in mind, because the requirements for each segment from edge to cloud are quite unique. The requirements for edge-based servers are quite different from enterprise data center requirements.

So, when you’re choosing a processor for your day-to-day needs for running the edge-to-cloud story there are three main factors that we take into consideration, the 3 Ps:

  1. Price
  2. Power
  3. Performance

Ice Lake offers the data center portfolio for all your enterprise data center workloads. So, it makes it as flexible as possible to take care of your general-purpose workloads, AI workloads, 5G, or network-related workloads. So, in that way Ice Lake plays a vital role in solving most of the problems pertaining to data centers and even for some of the aspects of 5G and HPC as well.

Intel offers this flexibility through Intel Speed Select Technology, which is like having multiple profiles built into a single processor. When it comes to meeting workload requirements, some workloads may need high-priority cores, some low-priority cores; some may need high frequency, some low frequency. It is not always easy to procure separate servers for each of these segments.

With Speed Select Technology, a single server with a single processor can expose different profiles by grouping cores together for different workload requirements. This is a flexibility that Intel offers which is not available with any other processors offered by competitors in the market.
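
The profile idea can be pictured as partitioning one processor's cores into named groups with different priorities and frequency targets. A toy sketch follows – the `build_profiles` helper, profile names, core counts, and frequencies are all invented for illustration; the real feature is configured through firmware and OS tooling, not application code.

```python
# Toy sketch of the idea behind Intel Speed Select: one physical set of
# cores, partitioned into named profiles with different target frequencies.

def build_profiles(total_cores, profiles):
    """Assign core IDs to profiles; profiles = [(name, cores, ghz), ...]."""
    assert sum(c for _, c, _ in profiles) <= total_cores
    layout, next_core = {}, 0
    for name, cores, ghz in profiles:
        layout[name] = {
            "cores": list(range(next_core, next_core + cores)),
            "target_ghz": ghz,
        }
        next_core += cores
    return layout

layout = build_profiles(
    total_cores=40,  # matches the Ice Lake top core count quoted earlier
    profiles=[
        ("high-priority", 8, 3.1),  # few cores pinned at high frequency
        ("general", 32, 2.3),       # the rest at a base frequency
    ],
)
```

One server can then serve both the latency-sensitive workload (on the high-priority group) and bulk workloads (on the rest), which is the procurement flexibility the interview describes.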

Intel is building the foundation for a true hybrid and multi-cloud scenario, with consistent performance and seamless migration of virtual machines at global scale. By bridging edge to cloud – innovating in hardware, connecting software and hardware, and bringing all the partners together – Intel is driving flexible performance from edge to cloud.

Asheet- Thank you very much, I think this was a great conversation around the latest Xeon processor which is the Ice Lake and I really appreciate you taking the time to do this.

THIRD SERIES

Now, we are going to discuss Intel’s strategy around security and some of the performance advantages that come with it. Let’s discuss the security aspect of Intel’s strategy and architecture.

So Sreejith, why don’t you just take us through Intel’s data center security solutions which are there and the overall strategy that Intel has around security, because it has become even more important in recent times!

Sreejith- Sure Asheet, you’re right – we can’t stress the importance of security enough in this scenario. Intel is focused on protecting data throughout its phases. There are three different phases when it comes to protecting data:

  1. Data in motion
  2. Data at rest
  3. Data in use

Intel’s view is that security is only as good as the layer below it. It does not matter how secure an application is: if the platform, the underlying storage, or the operating system is compromised, an attacker can easily move up the stack, and you can imagine what happens to your application then. This is why Intel brings this strategy of protecting data in motion, in use, and at rest.

Intel’s approach to innovating in security is to start at the lowest layer, which is the silicon. That is the most granular level you can get down to in protecting the underlying platform, so we protect the platform by establishing a chain of trust, which provides that extra bit of resilience.

When it comes to catering to day-in, day-out application needs, some of the important security features we are building into the platform are:

  • Intel SGX – A key feature; we started supporting Intel SGX from our Xeon E series processors, and we have improved it significantly to provide secure enclaves that protect applications at the most granular level. It protects an application while it is executing in memory – it protects the very memory in which your application runs. So, it provides granular protection against the most sophisticated attacks, like BIOS attacks or firmware attacks.
  • Total Memory Encryption (TME) – This feature provides bulk encryption of the entire system memory to protect against physical attacks. It ensures that all memory accessed from the Intel CPU is encrypted – including customer credentials, encryption keys, and other IP or personal information on the external memory bus – all through the TME functionality.
  • Intel Homomorphic Encryption Toolkit – A toolkit designed to provide well-tuned software and hardware that boosts the performance of homomorphic-encryption (HE) based solutions on the latest platforms.
  • Platform Firmware Resilience – It stops the CPU and BMC from booting until the firmware signature has been completely verified. It builds a trusted platform through firmware resilience, monitoring and filtering out malicious activity on the system buses.
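
The “compute on encrypted data” idea behind homomorphic encryption can be shown with textbook RSA, which happens to be multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. This is a toy demonstration with a deliberately tiny key, not how the Intel HE toolkit works internally.

```python
# Textbook RSA is multiplicatively homomorphic:
#   Enc(a) * Enc(b) mod n  decrypts to  a * b  (when a*b < n).
# Tiny key for illustration only -- never use textbook RSA in practice.

n, e, d = 3233, 17, 2753         # n = 61 * 53; (e, d) valid for phi = 3120

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 6
product_of_ciphertexts = (enc(a) * enc(b)) % n
# The party multiplying the ciphertexts never sees a or b, yet the
# decrypted result is their product.
recovered = dec(product_of_ciphertexts)   # -> 42
```

Fully homomorphic schemes extend this so that both addition and multiplication work on ciphertexts, which is what makes processing sensitive data without decrypting it possible.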

So that is the level of security Intel is trying to bring in at the hardware level. At the same time, security is a compute-intensive workload: when you bring these new features into the processor, they can take a lot of capacity out of it.

If you are running general-purpose workloads alongside these highly secure applications, you may face noisy-neighbor problems, where the secure applications take up all the cores and your general-purpose workloads take a hit.

To avoid that, Intel has come up with cryptographic acceleration, available with our 3rd Gen Xeon Scalable processors. These crypto accelerators, along with software innovations, provide a great performance improvement for the most widely used cryptographic algorithms. They also provide fast, strong encryption and decryption, along with the AVX-512 units we already have in the processors.

So, this is the whole network and crypto story that we want to deliver through our Ice Lake processors.

Asheet- Fantastic, I really loved this, because all of it has been an eye-opener for me. We are well versed with the idea that there must be application-level security, data-level security, and physical security, but we probably don’t think about what is required at the silicon level – and you clarified brilliantly that security is only as good as the platform, the layer below it.

I think this entire aspect of protecting at the most granular level – total memory encryption, bulk encryption, platform firmware resilience – these are some very interesting things, and great stuff is being done by Intel around this.

I was browsing and came across confidential computing and the way Intel looks at it. It is more of isolation of data between various privileged portions of a system which brings me to the next question of

What is Intel doing around confidential computing?

Sreejith- Sure. Confidential computing, as you said, is gaining momentum like anything. Intel is a founding member of the Confidential Computing Consortium, and we did touch upon the SGX technology earlier.

Intel SGX is a key technology contributing to confidential computing, and it is already deployed by a lot of cloud players as well. If I must take an example here – Microsoft Azure offers confidential computing as a service leveraging the Intel SGX technology.

So, Intel SGX is a key ingredient in the confidential computing that many cloud providers are offering at this moment, and it is available on the Intel platform. What it offers is hardware-based memory encryption that isolates specific application code and data in memory. It allows user-level code to allocate private regions of memory, which we call “enclaves”.

For example, say you are running a hypervisor with about five applications, and two of those five are super critical to you. Imagine a platform where those two applications are very secure, but the other applications are not. If the applications that are not secured get attacked, the attack can easily sneak into the other, highly critical applications as well.

So, it is critical to address these aspects, and that is where SGX offers private enclaves – private memory regions that are completely isolated from the other applications operating in the same memory. In that way, we help highly critical applications protect against software attacks, be it at the operating system level, the driver level, or the BIOS level.
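
The enclave pattern – secrets sealed behind a narrow interface – can be mimicked in ordinary code, though only as an analogy: real SGX enforces the isolation in hardware, which no software sketch can reproduce. The `Enclave` class below is invented for illustration.

```python
# Software analogy for the enclave pattern: the secret key lives only
# inside the sealed object, and untrusted code interacts with it through
# one narrow operation. Real SGX isolates the enclave memory in hardware.
import hashlib
import hmac

class Enclave:
    def __init__(self, secret_key: bytes):
        self.__key = secret_key  # "sealed": name-mangled, not exported

    def sign(self, message: bytes) -> str:
        # The only exposed operation; the key itself never leaves.
        return hmac.new(self.__key, message, hashlib.sha256).hexdigest()

enclave = Enclave(b"device-credential")
tag = enclave.sign(b"attest me")
# Untrusted code can request signatures but cannot read the key directly:
leaked = getattr(enclave, "key", None)  # -> None
```

The analogy captures the programming model only: in Python the secret is still recoverable by a determined caller, whereas SGX denies even the OS and hypervisor access to enclave pages.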

It will also help you prevent memory bus snooping attacks and other sophisticated attacks, like cold boot attacks against memory contents in RAM. At these levels, SGX provides protection by offering the smallest possible attack surface.

So, malware that subverts other software components can be blocked by using the SGX technology. It offers processor-reserved memory, protecting critical code from other applications that might compromise the underlying platform.

That’s how Intel is playing a key role in confidential computing, and we are improving this over time. I am sure this will get translated into a very good story for you to take to your end customers.

Asheet – Ok thanks Sreejith. And again, this was a good explanation that you gave around security and strategy.

I’m sure this is something that not only Intel is focusing on – all the SIs you work with, whether National or Global, are also focusing on it.

It's obvious: because customers are focused on it, we all had better be focused on it as well. This is the need of the hour.

As SISL and Techjockey, we are taking a lot of initiatives around security, and we have a pretty strong team for it. But I want to understand what is really happening across the SI space you work with, from the applications being developed to the solutions being deployed and proposed, and the discussions you have had. So just take us through that; it will be enlightening for us.

Sreejith- Surely. It is good that you brought this up because, as we stressed, security is one of the most important aspects. When we build solutions, it is important that we leverage these features available at the platform level.

For that, we need the right SI partners and the right ISV ecosystem partners. We can only take this to end customers as a package, so it is important that we bring this entire narrative together to address the challenges customers are facing at this point.

Intel has been working a lot in this space, leveraging SGX technology to provide confidential computing and improved security for finance-based industries. Likewise, leading partners such as Microsoft Azure, which offers confidential computing as a service, and Alibaba Cloud are also leveraging SGX.

So, those are some of the key partners we are working with, and I am sure more and more partners will sign up. We look forward to partnering with SISL as well in the future, perhaps on confidential computing as a service, or an improved secure virtual machine as a service.

So, these are aspects we can work on together going forward, and we look forward to taking such conversations to some of our enterprise and leading customers as well.

Asheet- Great, thank you, and I think we'll be glad to have such a conversation with you. It's good to know that there is a lot of momentum around Intel's security solutions with these ISVs.

So just to round up: we covered Datacenter solutions in the first discussion, the 3rd Gen Xeon (Ice Lake) launch in the second, and now security. If we package all of these together and look at how performance has improved from edge to cloud for customers with these Intel offerings, could you summarize all this and give us a snapshot of everything we have discussed?

Sreejith- Definitely. Going back to one of your opening questions, on the transformation from a processor-centric company to a data-centric company, we can touch on that. When it comes to any workload's performance, it is important to understand that performance is not defined solely by the speed or frequency of the processor; it is not enough just to have a high-speed processor.

The processor has to work in conjunction with the right amount of memory, storage, and network to deliver a high-performance workload.

With that in mind, it is important to build an optimized server, an optimized platform, that supports the workload holistically: storing data, processing data, and moving data. All of these aspects have to be stitched together to deliver the real performance story for your data center workloads, which translates into building the best applications for modern data centers.

So that is how I would summarize the entire performance story Intel is building, and we look forward to our collaboration on this journey together.

Asheet- Thank you very much, Sreejith, for yet another great discussion. It's really heartening to know that Intel has been at the forefront of technological advancements for so many years, still is today, and will continue to be for as long as we can foresee.
