Monday, July 8, 2019

New ‘Experience Zones’ Offer a Fast Route to AI Expertise

New Dell EMC AI Experience Zones showcase the business advantages of artificial intelligence and provide ready access to the latest Dell EMC AI solutions.

Organizations around the world now recognize the opportunity to put artificial intelligence to work to solve pressing business problems. In one sign of this growing AI momentum, a recent IDC report predicts that worldwide spending on AI systems will jump by 44% this year, to more than $35 billion.[1]

This push into the brave new world of AI isn't confined to certain industries. It's everywhere, according to IDC. One of the firm's research managers notes in a news release, “Significant worldwide artificial intelligence systems spend can now be seen within every industry as AI initiatives continue to optimize operations, transform the customer experience, and create new products and services.”[1]

Clearly, when it comes to AI, organizations are ready to seize the day. Here's where things get harder. Now that people have bought into the vision, the task is to turn great ideas into great AI systems that deliver measurable business value. To get there, organizations need to gain experience with AI applications and the high-performance computing systems that run them.



Everyone is invited to the new Dell EMC AI Experience Zones! These locations for immersive AI experiences give Dell EMC customers and partners an opportunity to gain a comprehensive understanding of AI technologies and advancements, as well as practical, hands-on experience with the design and deployment of AI solutions. Along the way, the AI Experience Zones show how organizations can leverage the Dell EMC HPC and AI ecosystem to address today's business challenges and opportunities across a wide range of industries.

The AI Experience Zones, launched together with Intel®, place a strong emphasis on simplifying AI deployments. Through masterclass training, AI expert engagements and collaboration opportunities offered on-site, users are guided through the steps required to kick-start AI initiatives within their organizations - including design, installation, maintenance and, most importantly, the delivery of tangible business outcomes.

The Customer Solution Center connection


The new AI Experience Zones are an extension of our Customer Solution Centers, which are located around the world. These centers give organizations an opportunity to gain firsthand experience with the latest and greatest Dell EMC products and services, along with offerings from other Dell Technologies companies.

Through a customized Customer Solution Center engagement, your business can work directly with our subject matter experts in our dedicated labs. Remote connectivity allows you to include global team members in the CSC experience, or to work with us entirely from your own location, as you plan and implement your digital transformation strategy - and bring your ideas to life.

Saturday, July 6, 2019

Dell EMC Doubles Down on VxBlock at Cisco Live

This past spring, Dell EMC reaffirmed its decade-long commitment to converged infrastructure (CI) with a multi-year extension of its longstanding systems integrator agreement with Cisco.

At the heart of our CI strategy is the VxBlock 1000, a system that provides a true mission-critical foundation for hybrid cloud and helps customers achieve greater simplicity and efficiency.

This year at Cisco Live, Dell EMC is pleased to make several announcements that deepen VxBlock 1000 integration across servers, networking, storage and data protection. Together, these announcements represent the next key milestone in our commitment to CI innovation and our customers - backed by our strong relationship with Cisco.

Here’s a glance at what we’re announcing today:

Realizing the Power and Performance of NVMe over Fabrics


NVMe is key to unlocking a higher level of cloud operations on CI, but the full business benefit of NVMe can only be realized with an end-to-end infrastructure enabled by NVMe over Fabrics (NVMe-oF).

To help customers realize the full power of NVMe-oF, Dell EMC is announcing new integrated Cisco compute (UCS) and storage (MDS) 32G options, extending PowerMax capabilities to deliver game-changing NVMe performance across the VxBlock stack. This enhances the powerful architecture, consistent high performance, availability and scalability that have become synonymous with the VxBlock, allowing you to meet the most demanding needs of high-value, mission-critical workloads.



Now, customers can benefit from extreme end-to-end system performance with one system that can evolve from today's microsecond latency to tomorrow's nanosecond latency.

These new compute and storage options will be available to order later this month.

Extending Integrated Data Protection to the Cloud


Dell EMC developed the concept of integrated data protection to help customers safeguard different tiers of applications and data efficiently and affordably - with exactly the right level of protection for every business need.

While legacy data protection “bolted on” to a new converged system might work, it may not provide the right level of protection for every service-level need. That is why Dell EMC offers a flexible set of options for streamlined backup and recovery, data replication, business continuity, and workload mobility to deliver reliable, predictable, and cost-effective availability for Dell EMC converged infrastructure.

Today, we're extending our reliable, factory-integrated on-premises data protection solutions for VxBlock to hybrid and multi-cloud environments, including AWS. This release, which will be available to order in July, features options to help safeguard VMware workloads and data using new cost-effective Data Domain Virtual Edition and Cloud Disaster Recovery software options.

Thursday, July 4, 2019

What Exactly is 5G? Not Just Another G

We have all heard about 5G, but what exactly is it?


5G is simply defined as the fifth generation of wireless networks. But it isn't just another G. Yes, this wireless system upgrade delivers data to our cell phones at remarkably fast speeds. But while 5G will indeed make our smartphones faster, it will also play a large role in the development of other kinds of wireless technology including, but not limited to, artificial intelligence, drones, IoT, telehealth and autonomous vehicles. Uber is the ‘app that 4G built’ - so what will 5G build? With so many use cases, the possibilities are endless.

The raw speed of 5G comes from using parts of the radio spectrum that have larger capacities to encode data, and therefore provide greater throughput. This part of the spectrum also enables larger bandwidth to the end-user device, such as a mobile phone. The distance limits of the new mmWave spectrum are leading to densification of cells, i.e. deploying many small cells closer to the end users. This allows more users, lower latency and expanded coverage. This increase in the number of wireless cells is driving next-gen wireless radio infrastructure.
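As a rough illustration of why wider channels mean higher throughput, the Shannon-Hartley bound grows linearly with bandwidth. The channel widths and SNR below are hypothetical, chosen only to compare a typical 4G-class carrier against a much wider mmWave-class carrier:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley channel capacity: C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative channel widths at the same 20 dB SNR:
lte_20mhz = shannon_capacity_bps(20e6, 20)       # a typical 4G carrier
mmwave_400mhz = shannon_capacity_bps(400e6, 20)  # a wide mmWave carrier

print(f"20 MHz channel:  {lte_20mhz / 1e6:.0f} Mb/s upper bound")
print(f"400 MHz channel: {mmwave_400mhz / 1e6:.0f} Mb/s upper bound")
```

Twenty times the bandwidth gives twenty times the theoretical capacity, before any gains from beamforming or densification are counted.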



In current 4G deployments, radios are placed at the top of the tower close to the antenna, and a separate digital Base Band Unit (BBU) sits at the base of the cell tower. The BBUs are purpose-built embedded platforms containing DSPs, FPGAs and specialized ASICs that process the radio traffic and send Ethernet traffic upstream. With densification of cells, it is becoming expensive to have a BBU per cell location. Instead, this is leading to a new architecture in which most of the BBU processing is centralized, serving a larger number of cells. This is called C-RAN (Centralized RAN). Minimal processing of the radio signal is done at each cell site to reduce the amount of data that needs to be sent to the centralized C-RAN unit, which can be 20 km from the cell sites. This leads to intelligent ways of splitting processing between the cell site and the centralized C-RAN location. The 3GPP industry standards group and the ITU (International Telecommunication Union) are working on standards specifications for this processing split between cell site locations and the C-RAN location.
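The reason the processing split matters can be seen with a back-of-the-envelope calculation: shipping raw I/Q samples from every antenna to a central site consumes enormous fronthaul bandwidth. The sample rate, sample width and antenna count below are hypothetical, used only to show the scale of the worst case:

```python
def fronthaul_rate_gbps(sample_rate_msps: float,
                        bits_per_iq_sample: int,
                        antenna_streams: int) -> float:
    """Raw I/Q fronthaul rate if no processing happens at the cell site —
    the worst case that motivates splitting processing with C-RAN."""
    return sample_rate_msps * 1e6 * bits_per_iq_sample * antenna_streams / 1e9

# Hypothetical wideband cell: ~491.52 Msps, 30 bits per I/Q pair, 8 antennas
raw = fronthaul_rate_gbps(sample_rate_msps=491.52,
                          bits_per_iq_sample=30,
                          antenna_streams=8)
print(f"Raw I/Q fronthaul: ~{raw:.0f} Gb/s per cell site")
```

Roughly 118 Gb/s for a single cell site in this sketch, which is why higher-layer splits, where the cell site reduces the data before sending it to the C-RAN unit, are the subject of the 3GPP/ITU standardization work mentioned above.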

Centralized processing of the radio signal enables a smoother transition of the radio signal across cell sites as users move from one cell site to another, known as Coordinated Multi-Point (CoMP). This signal handoff between cell sites becomes more important with densification of cells. The centralization of radio processing also makes it possible to use standard x86 server architecture as compute nodes. Any specialized processing is done using emerging hardware accelerators (FPGAs, SmartNICs) that plug directly into standard servers. This is leading to a hybrid architecture that combines standard x86 servers with hardware accelerators (FPGAs and SmartNICs) for high-speed processing of network traffic, enabling features like network slicing.

The use of standard server-based platforms for C-RAN is also creating opportunities to build a service delivery platform called MEC (Multi-Access Edge Compute), where third-party providers and consumers can host their applications. Multiple industry collaborative efforts are under way to standardize the MEC architecture and ensure interoperability (see ETSI MEC). Applications that traditionally ran in a backend cloud or data center can now move to the MEC platform to be closer to the network's edge. The centralized telco core services (known as EPC, or Evolved Packet Core) can also move to the edge, resulting in a distributed virtual EPC at the network edge.

There are infinite possibilities from edge to core to cloud.


Starting from a firm foundation of industry-leading server, storage, networking, and platform software, Dell Technologies is leading the way in this emerging mobility service architecture of 5G. With the emergence of artificial intelligence and machine learning, we are also delivering platforms that allow between one and ten high-power FPGAs and GPUs in server platforms. We are building new components, enabling new hardware and software layers in the market faster and more cost-effectively than the competition, and building deep relationships with the partner ecosystem that focuses on the real objective of 5G - to give end users what they need.

At Dell Technologies we are thrilled to be a leader in the 5G space and to help networks transform. Come back in July for the second installment of our ‘Not Just Another G’ series.

Tuesday, July 2, 2019

Taking the Fear Factor Out of AI

For decades, films like 2001: A Space Odyssey, WarGames, Terminator and The Matrix have portrayed the future and what it might be like if artificial intelligence (AI) took over the world. Fast forward to 2019 and AI is quickly becoming a reality. The things we once only saw in the movies are improving our lives, and we often don't even realize it.

We've lived with AI assistance for quite a while. We use Waze and Google Maps to help us predict traffic patterns and find the shortest driving routes. We let Roomba navigate our homes and keep our floors clean. We trust pilots to use autopilot while in the air, so they rarely focus on anything other than takeoffs and landings. Even our data centers are getting smarter, with learning technologies that automate workload sharing, data tiering and data movement. All of these functions rely on AI and are providing us positive experiences. And we are accepting them into our lives at such a rapid pace that we are now beginning to expect this level of assisted intelligence in the products and services we interact with.



On the flip side, there are many new, broader, more fully autonomous AI applications that really get at the heart of what the sci-fi community has exploited to the point of giving us the creeps. Think robot wars, Big Brother mass surveillance, or the extinction of mankind. It's human nature to fear the unknown, and because technology fast-tracks innovation faster than society can adapt, technologies like deep learning are constantly exposed to fear mongering. But I recently learned first-hand that it doesn't have to be this way with AI, and that things first viewed as frightening or weird can quickly evolve once you see and realize the value they can bring. Once you experience value, that thing becomes normal, and like a drug you want more of it. At that point you will see an obvious separation between the products and services that have fully embraced the latest technology to pivot their offering (think Tesla, Airbnb, Lyft) and those that are racing to catch up.

I recently had the chance to interact with Sophia the Robot - the now famous AI-powered robot known for her human-like appearance and behavior. Using AI, visual data processing and facial recognition, Sophia can imitate human gestures and facial expressions, answer certain questions and make simple conversation on topics she has been trained on. As is standard with AI, she has been designed to get smarter over time and gain social skills that help her interact with humans, much as other humans would.

When I first ‘met’ Sophia, it was awkward. I couldn't stop staring at her. But as we conversed, and I asked her more questions, I was amazed at how quickly I adapted to her being part of our environment. In under 24 hours, anything I had found creepy about first interacting with Sophia vanished. I was referring to her as a person, making jokes with her, and talking to her as though it were normal. And it was.

My point being, AI isn't future-looking; it's already a big part of our lives. As I learn more about the power of AI, I also want to help you, our customers, gain a better understanding of how important AI is to your business. I know that by experiencing advanced AI firsthand, as I have, you will gain new perspectives on what's possible when you turn creepy into cool to help humanity and sustain a competitive differentiation in your business.

Most recently, Dell EMC has been working with AI thought leaders to demystify AI with our Magic of AI series, designed to showcase the ‘Art of the Possible’ using the latest machine learning and deep learning techniques. This series uses first-hand experiences with advanced AI as the foundation to help spark ideas about how techniques like video analytics, image recognition, and natural language processing can be applied to your industry. For those of you who weren't able to join us for the inaugural event in New York City with Sophia the Robot, I'm happy to be able to share some of the digital highlights from the experience. You can watch my video interview above with Sophia or browse the highlight reel from the main event at the GMA studios in Times Square. If you prefer the live, in-person experience, please sign up for our next Magic of AI event at the ABC 7 Studios in Chicago on July 23rd with Dr. Poppy Crum, Neuroscientist & Technologist.

Sunday, June 30, 2019

Why Does Hardware Matter in a Software-Defined Data Center?

Today, organizations with a modernized infrastructure (aka “modernized” firms) are much better positioned to handle emerging technologies than their competitors with aging hardware. Modernized firms can rapidly scale to meet changing needs. They understand the importance of versatility, especially when it comes to handling demanding applications and processing the insane amount of data inundating us from all angles!

The right software-defined data center (SDDC) solutions can help organizations address those heavy demands and accommodate future growth. SDDC breaks down traditional silos and plays a vital role in a firm's data center transformation. Since all elements in an SDDC are virtualized - servers, storage, and even networking - they can easily change and reduce the time to deploy new applications.



With all of these benefits, it's no shock that many organizations see value in SDDC as a long-term strategy. They want to be there, and know they need to be there to succeed long-term. But getting to that point is a journey - and one that has to start with the right foundation.

Setting the Record Straight


When it comes to SDDC, one of the biggest misconceptions is that hardware doesn't really matter. Those of us in hardware don't take it personally (after all, it's SDDC, not HDDC). But that mindset couldn't be more wrong. Having the right hardware doesn't just matter; it's critical. Why? For one thing, SDDC runs on hardware. This might seem like a given, but without the right servers in place you can't do all the other cool things that come with SDDC. Servers are the foundation of SDDC, and without a firm foundation? Well, we all know what happened to the guy who built his house on the sand…

To provide a bit more context, here are 6 Reasons Hardware Matters in an SDDC:

  1. Increased Capacity: Because SDDC runs on hardware, performance is limited by the capacity and limitations of your servers. You're forced to operate within the limits of the resources available, and if those resources are limited, your SDDC capabilities will be, too.
  2. Faster Deployment: A modern infrastructure helps reduce the time it takes to deploy new applications. Automation tools such as zero-touch deployment make life much simpler for your IT staff. With aging infrastructure, it can take IT organizations days, weeks, or even months to deploy new versions of applications in their data centers. Modernized servers help to drastically reduce this time.
  3. Scalability - The right hardware allows you to more easily scale to meet your changing needs. Modernized servers support data growth, as they give you the capacity to add additional resources such as memory. You can scale to meet business demands, avoiding infrastructure “sprawl.”
  4. Emerging Workloads - Today's workloads are more complex than those of the past. Emerging workloads that require large amounts of parallelized computation need modernized servers designed specifically to support them. If your organization uses (or plans to use) predictive analytics, machine learning, or deep learning, you must have the right infrastructure in place. A recent study by Forrester found that 67% of servers purchased within the next year will be used to support emerging technology workloads including IoT, additive manufacturing, computer vision, predictive analytics, and edge computing.[1]
  5. Customized Workload Placement - Another advantage of modernized servers is the ability to customize your workload placement based on your particular needs and resources. That means you can run some workloads on-premises (such as data-sensitive applications), while keeping others in the cloud. For example, the PowerEdge MX7000, which was designed specifically for SDDC, is a modular, software-defined infrastructure that can assign, move, and scale shared pools of compute, storage, and fabric with greater versatility and efficiency.
  6. Improved IT Staff Productivity - With aging infrastructure, your IT staff likely spends a large chunk of time managing day-to-day tasks. This doesn't leave much time to focus on strategy or work on things that will contribute to overall business results. Modernized servers help you automate tasks, making it much simpler to deploy, monitor, and maintain, so your staff can add more value in other areas.
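The first point above can be put in code: a virtualization layer can only place a workload if the physical hardware has headroom for it, because software abstraction allocates capacity rather than creating it. This is a minimal sketch with made-up capacity figures and a hypothetical 20% failover reserve:

```python
def can_place_workload(capacity_ghz: float, used_ghz: float,
                       demand_ghz: float, reserve_frac: float = 0.2) -> bool:
    """An SDDC scheduler can place a workload only if the underlying
    server leaves enough physical headroom after a failover reserve."""
    usable = capacity_ghz * (1 - reserve_frac)  # keep 20% in reserve
    return used_ghz + demand_ghz <= usable

# A host with 64 GHz of aggregate CPU, 40 GHz already committed:
print(can_place_workload(64, 40, 8))   # fits: 48 <= 51.2 usable
print(can_place_workload(64, 40, 16))  # does not fit: 56 > 51.2 usable
```

However elegant the software layer, the second placement fails until more hardware is added, which is the whole point of reason 1.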


The journey to an SDDC can be tough, and unfortunately the path to get there isn't clear-cut. But if you begin with a solid foundation, including the right servers, you will be positioned to adapt and grow to meet your changing business needs.

Evolution at the Edge

At Dell Technologies World this year, customers and journalists were curious about the trends I'm seeing in the market and my predictions for the future. I shared my thoughts on the impact of 5G, how AI and IoT are continuing to intersect, and the need for companies to have consistent, flexible infrastructure so they can adapt quickly. I also emphasized that the foundation of all these transformations is the shift to edge computing - and it is our OEM & IoT customers across all industries who are leading this evolution.

Location, location, location


At this point, I should clarify what I mean by the edge. I'm talking about data being processed close to where it's produced, in contrast to the traditional centrally-located data center. I like to think of the difference between the data center and the edge as the difference between living in the suburbs and living in the city - where all the action is. Right now, about 10 percent of enterprise-generated data is created and processed outside a traditional centralized data center or cloud. By 2023, however, Gartner predicts this figure will reach 75 percent. That's a dramatic shift by any definition.

Three whys


So, why is this happening? Three reasons. First, according to the latest research, the number of connected devices is expected to reach 125 billion by 2030, which will put about 15 connected devices into the hands of every consumer. It simply doesn't make sense to move all that data to a traditional data center - or even to the cloud.



Second is cost. It's naturally more cost-effective to process at least some of the data at the edge. And third, it's about speed. Many use cases simply cannot accept the latency involved in sending data over a network, processing it and returning an answer. Autonomous vehicles and video surveillance are great examples, where a few seconds' delay can mean the difference between an expected outcome and a catastrophic event.
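The speed argument is simple arithmetic: round-trip network time is paid twice, so even a slower on-site processor can answer far sooner than a distant cloud. The millisecond figures below are hypothetical, purely to illustrate the shape of the trade-off:

```python
def response_time_ms(one_way_network_ms: float, processing_ms: float) -> float:
    """Total time to ship data out, process it, and return an answer."""
    return 2 * one_way_network_ms + processing_ms

# Hypothetical figures for analyzing one video-surveillance frame:
cloud = response_time_ms(one_way_network_ms=60, processing_ms=20)  # distant region
edge = response_time_ms(one_way_network_ms=2, processing_ms=30)    # on-site server

print(f"cloud round trip: {cloud:.0f} ms, edge: {edge:.0f} ms")
```

In this sketch the edge answers in 34 ms versus 140 ms for the cloud, even though the edge box processes the frame more slowly; for a vehicle or a surveillance alert, the network time dominates.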

Edge computing examples


So what kind of compute lives at the edge? Well, it helps me to visualize the edge as a spectrum. At one end - what I call the far edge - is where data is generated. Picture millions of connected devices generating a constant stream of data for performance monitoring or end-user access. One example is a fluid management system, where valves need to be instantly opened or closed based on threshold triggers being monitored. If this is something you're interested in (using IoT data to help customers better manage and troubleshoot control valves), I suggest looking into our joint solution with Emerson.

Or consider how the frequency of fridge doors opening in the chilled food section of a store affects the fridge's temperature levels, and ultimately the food. It would be crazy to send such a large volume of data to the cloud simply to indicate a binary safe/unsafe temperature status - the store manager only needs to know when the temperature is unsafe. So the edge is the obvious place to aggregate and analyze this kind of data. In fact, we've worked with a major grocery retailer to implement refrigeration monitoring and predictive maintenance at their edge. Today, their cooling units are serviced as needed, and they're saving millions of dollars in spoiled food. If you are interested in using data to help avoid food waste, check out our joint solution with IMS Evolve.
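The fridge example boils down to threshold filtering at the edge: aggregate the raw sensor stream locally and forward only the alerts the store manager needs. A minimal sketch, with a hypothetical safety threshold and readings:

```python
SAFE_MAX_C = 5.0  # hypothetical food-safety threshold, degrees Celsius

def edge_filter(readings_c):
    """Keep only the (sample index, temperature) pairs that breach the
    threshold — the edge forwards alerts, not the whole stream."""
    return [(i, t) for i, t in enumerate(readings_c) if t > SAFE_MAX_C]

# One stretch of per-second readings from a single fridge sensor:
readings = [3.8, 4.1, 4.0, 6.2, 4.2, 3.9]
alerts = edge_filter(readings)
print(f"{len(readings)} samples in, {len(alerts)} alert(s) out: {alerts}")
```

Six samples in, one alert out; scale that ratio across thousands of sensors and the bandwidth and cloud-processing savings of edge aggregation become obvious.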

Application-driven solutions


Of course, in most cases, the application determines the solution. For example, speed in surveillance systems is crucial when you are looking for a lost child in a mall, or identifying and stopping a known security threat from entering a football stadium. The last thing you want at that crucial moment is for a cloud environment to tell you it's busy searching.

With the advent of 5G, carriers are addressing the need for greater data traffic performance by putting servers at the base of cell towers rather than in a regional data center. These are all examples where configuration capacity, great graphics and processing performance come into play. Which brings me to another interesting point. When edge computing began, dedicated gateways were the main focus. While still important, that definition has expanded to include servers, workstations, ruggedized laptops and embedded PCs.

The micro data center


Another category of edge compute is what Gartner calls the Micro Data Center. Many of the features of a traditional data center come into play here, such as the need for high reliability, the ability to scale compute as needed, and levels of management. These are situations that don't typically demand ruggedized products, but where space constraints are likely.

In these scenarios, customers typically consider virtualized solutions. Remote oil rigs, warehouse distribution centers and shipping hubs are great examples. Just think of the speed of packages flying down a conveyor belt in a distribution center, being routed to the right loading area while the data is logged in real time for tracking. Batch files are then sent back to a central data center for global tracking, billing, and documentation. Essentially, you have a network of micro data centers at the edge, aggregating and analyzing data, while feeding the most relevant information into a larger regional center.

Finding the Sweet Spot When It Comes to Your Server Refresh Cycle

And that's why the server refresh cycle is so important for organizations today. Servers don't last forever, and waiting too long to replace them can lead to downtime and put your core business functions at risk. But on the flip side, if you refresh too early and for the wrong reasons, it can be a pricey decision that consumes much of your IT budget.

How Do We Find That Server Refresh “Sweet Spot”?


When it comes to a server refresh, there are many things to consider. Cost, frequently run applications, IT staff, current infrastructure, growth objectives, and your plans for emerging workloads all come into play. Unfortunately, with a server refresh, there's no magical, one-size-fits-all answer. The best time to refresh your servers depends on your organization's unique needs and long-term goals. There are obvious costs associated with modernizing your on-premises infrastructure. But there are also substantial costs to not doing it. By continuing to run legacy hardware, you may be putting your business at risk.



In the past, the typical server refresh cycle was about five years. But that timeline has shifted. Today, it isn't uncommon for companies to refresh on a 3-year cycle to keep up with modern technology. These businesses aren't just refreshing for the sake of it (although we agree that new servers and data center toys ARE great) - they're doing so to meet growing demands and strategically position themselves to handle new innovations in the future. They know they must modernize to remain competitive and prepare for new technologies.

Benefits of a Server Refresh


Modern servers are built specifically to handle emerging workloads. For example, the PowerEdge MX7000 includes Dell EMC kinetic infrastructure, meaning shared pools of disaggregated compute, storage, and fabric resources can be configured - and then reconfigured - to meet specific workload needs and requirements.

In addition to handling data-intensive workloads, replacing servers and other critical hardware reduces downtime and greatly reduces the risk of server failure. Improved reliability means that your IT staff spends less time on routine maintenance, freeing them up to focus on things that add value to the business.

Furthermore, newer servers provide greater versatility and give you the opportunity to scale as needed based on changing demands. Some workloads, especially mission-critical applications, are best run on-premises, and a modernized infrastructure makes it simpler to adapt and deploy new applications. A recent study by Forrester found that Modernized firms are more than twice as likely as Aging firms to cite faster application updates and improved infrastructure scalability.[1]

Modernized servers also allow you to virtualize. By layering software capabilities over hardware, you can create a data center where all of the hardware is virtualized and controlled through software. This helps improve traditional server utilization (which is typically under 15% of capacity without virtualization).
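The utilization math behind that claim is worth making concrete. Using the 15% figure from the paragraph above and a hypothetical 60% target utilization for virtualized hosts, a quick consolidation estimate looks like this:

```python
import math

def hosts_needed(n_legacy: int, legacy_util: float = 0.15,
                 target_util: float = 0.60) -> int:
    """Servers required after consolidating legacy machines (averaging
    legacy_util) onto virtualized hosts run at a higher target_util."""
    total_work = n_legacy * legacy_util        # work in whole-server units
    return math.ceil(total_work / target_util)

# 20 legacy servers at 15% average utilization:
print(hosts_needed(20))  # consolidates onto 5 virtualized hosts
```

Twenty lightly-loaded legacy boxes collapse onto five well-utilized hosts in this sketch; the real ratio depends on workload peaks, failover reserves, and memory rather than CPU alone.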

A server refresh presents a significant opportunity to improve your IT capabilities. New servers enable you to remain competitive and position you for future data growth, innovative technologies, and demanding workloads that require systems integration.

To learn more about the benefits of a server refresh, download the Forrester study Why Faster Refresh Cycles and Modern Infrastructure Management Are Important to Business Success or speak to a Dell EMC representative today.

Friday, June 28, 2019

Meet Deep Learning with Intel – The New Addition to the Dell EMC Ready Solutions for AI Portfolio

The new Dell EMC Ready Solutions for AI - Deep Learning with Intel accelerates AI insights, optimizes TCO and offers a fast on-ramp for deep learning workloads.

On a mission to bring the promise of artificial intelligence to life and take advantage of the huge amounts of data generated on a 24×7 basis, many organizations are rushing to pull together the various technology elements required to power deep learning workloads. Today, this quest got a lot simpler with the launch of a new Ready Solution for AI based on Dell EMC and Intel innovation.

The Deep Learning with Intel solution joins the growing portfolio of Dell EMC Ready Solutions for AI and was unveiled today at International Supercomputing in Frankfurt. This integrated hardware and software solution is powered by Dell EMC PowerEdge servers, Dell EMC PowerSwitch networking, and scale-out Isilon NAS storage; leverages the latest AI capabilities of Intel's 2nd Generation Intel® Xeon® Scalable processor microarchitecture and the open source Nauta platform; and includes enterprise support. The solution empowers organizations to deliver on the combined needs of their data science and IT teams and leverage deep learning to fuel their competitiveness.

Dell Technologies Consulting Services help customers implement and operationalize Ready Solution technologies and AI libraries, and scale their data engineering and data science capabilities. Once deployed, ProSupport experts provide comprehensive hardware and collaborative software support to help ensure optimal system performance and minimize downtime. Furthermore, Education Services offers courses and certifications on data science, advanced analytics and more.

AI simplified


The new Deep Learning with Intel solution simplifies the path to AI-powered applications with the fully featured, container-based Nauta deep learning platform, which provides an innovative template pack approach that eliminates the need for data scientists to master the intricacies of Kubernetes. In addition, Dell EMC's data scientists have built use case examples for image recognition, natural language processing and recommendation engines to help customers understand the capabilities of the solution's architecture.



Deep Learning with Intel also comes pre-configured with the TensorFlow deep learning framework, the Horovod distributed training library and all the requisite libraries for data modeling. This simplified path to productivity is both easy to set up and easy to use, and it empowers your data scientists to spend their time building models that generate value rather than wrangling with IT infrastructure.

Faster, deeper AI insights


Once the Deep Learning with Intel solution is up and running, you're positioned to accelerate model training and testing with the power of 2nd Gen Intel® Xeon® Scalable processors. This next-generation processor is at the heart of the Dell EMC PowerEdge C6420 servers used in the Deep Learning with Intel solution, and combined with the latest software optimizations for TensorFlow and supporting libraries, it reduces model training time. The processor includes new Vector Neural Network Instructions (VNNI) that significantly accelerate deep learning inference workloads with more efficient 8-bit integer data formats and instructions that power through four times as much data as was possible with 32-bit single-precision floating point methods.[1]
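The idea behind VNNI-style 8-bit inference can be sketched in plain Python (a hand-rolled illustration, not Intel's actual instructions; the `quantize` and `int8_dot` helpers and the chosen scale factors are made up for this example): floats are mapped into the int8 range, the arithmetic runs on integers, and the result is scaled back with only a small, bounded rounding error.

```python
# Illustrative sketch of 8-bit integer quantization as used in
# VNNI-style inference: floats are mapped to int8, the dot product
# runs entirely in integer arithmetic, and the result is rescaled.
def quantize(values, scale):
    """Map floats to the int8 range [-127, 127] using a fixed scale."""
    return [max(-127, min(127, round(v / scale))) for v in values]

def int8_dot(a, b, scale_a, scale_b):
    """Dot product computed in integer arithmetic, then rescaled."""
    qa = quantize(a, scale_a)
    qb = quantize(b, scale_b)
    acc = sum(x * y for x, y in zip(qa, qb))  # integer accumulate
    return acc * scale_a * scale_b

a = [0.5, -1.0, 0.25, 0.75]
b = [1.0, 0.5, -0.5, 0.25]
exact = sum(x * y for x, y in zip(a, b))      # float reference result
approx = int8_dot(a, b, scale_a=1/127, scale_b=1/127)
print(exact, approx)  # the int8 result closely tracks the float result
```

Because each value occupies 8 bits instead of 32, four times as many values fit in the same register width, which is where the fourfold data throughput claim comes from.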

The solution is integrated with the multi-user, open source Nauta software platform, which enables containerized training workloads that ran up to 18% faster than the same workloads on a bare metal system. As your organization's needs grow, the Deep Learning with Intel solution enables near-linear scaling, achieving 80% of theoretical maximum performance when the number of compute nodes is scaled from one to 16.[2]
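The scaling figure is easy to sanity-check with a quick back-of-the-envelope calculation (the `scaling_efficiency` helper is illustrative, not part of any Nauta tooling): 80% efficiency at 16 nodes corresponds to roughly a 12.8x speedup over a single node.

```python
# Back-of-the-envelope check of near-linear scaling: efficiency is
# the measured speedup divided by the ideal (linear) speedup.
def scaling_efficiency(nodes, speedup):
    """Fraction of ideal linear scaling actually achieved."""
    return speedup / nodes

# 80% efficiency at 16 nodes implies a ~12.8x speedup over one node.
speedup_at_16 = 0.80 * 16
print(speedup_at_16)                          # 12.8
print(scaling_efficiency(16, speedup_at_16))  # 0.8
```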

Improved TCO


Finally, the new Ready Solution for AI shines from a total cost of ownership perspective, especially in comparison with cloud and hardware-accelerated solutions. For deep learning training workloads, for example, the three-year TCO is 24% less on Deep Learning with Intel relative to a leading public cloud service, while providing double the compute time (24 hours a day versus 12 hours a day) and ten times the storage capacity (100TB versus 10TB).[3]
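The cited ratios can be worked through with illustrative numbers (the $1,000,000 cloud baseline below is a hypothetical placeholder; only the 24%, 2x and 10x figures come from the cited study):

```python
# Illustrative TCO comparison using the ratios cited above.
# The $1,000,000 cloud baseline is a hypothetical placeholder;
# only the 24% / 2x / 10x ratios come from the cited study.
cloud_tco = 1_000_000                     # hypothetical 3-year cloud TCO ($)
onprem_tco = cloud_tco * (100 - 24) // 100   # 24% less than the baseline

cloud_hours, onprem_hours = 12, 24        # compute hours available per day
cloud_tb, onprem_tb = 10, 100             # storage capacity in TB

print(onprem_tco)                    # 760000
print(onprem_hours / cloud_hours)    # 2.0  (double the compute time)
print(onprem_tb / cloud_tb)          # 10.0 (ten times the storage)
```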

Public cloud AI service costs can vary widely, and monthly charges can be surprisingly high when accidental mistakes result in runaway processes that consume excessive CPU time or generate massive volumes of data. In contrast, the Dell EMC Deep Learning with Intel on-premises solution provides managers and financial planners with known and predictable expenses, while enabling your organization to drive standardization and maintain control of your infrastructure and data.

Key takeaways


The new Dell EMC Ready Solutions for AI - Deep Learning with Intel is an ideal choice for organizations looking to leverage container-based environments to run both single-node and distributed deep learning training and inferencing workloads. It simplifies the path to productivity for data science and IT teams and delivers better-than-bare-metal performance. And like all Dell EMC Ready Solutions, this solution is based on a linearly scalable building-block approach, so your deep learning environment can grow to meet your changing needs in the future.

Wednesday, May 22, 2019

Mining for Gold in Worldwide Centers of Excellence

With the ever-growing flood of data hitting today’s enterprises, we’re in the midst of a new gold rush. To twist around a line from a Mark Twain character, you might say “there’s gold in them thar hills of data.” But this is true only for those organizations that can put high-performance computing systems, data analytics and artificial intelligence to work to capture nuggets of business value from streams of data.

So how do you get started down this path? Mining value from business data is, arguably, a lot more complicated than panning for gold in mountain streams. To be successful, you need a clear view of your business use cases, the help of experts who have been there and done it successfully, and hands-on experiences with the tools of the trade.

This is where Dell EMC HPC and AI Centers of Excellence enter the picture. These worldwide hubs for innovation and expertise help your organization jumpstart efforts to put the latest technologies to work in order to capitalize on data. The centers provide a place where people come together to experience thought leadership, test new technologies, and share research findings and best practices.

People are a big part of the CoE equation. Our HPC and AI Centers of Excellence cultivate local industry partnerships and provide direct input to a wide range of Information Technology creators. Through collaborative efforts, the Centers of Excellence open the door to the vast know‑how and experience in the community, including that of technology developers, service providers and end-users. Even better, the technology companies in the CoE community are eager to incorporate your feedback and needs into their roadmaps.

Let’s get more specific. In Dell EMC HPC and AI Centers of Excellence, you can gain a closer understanding of topics like these:

  • High speed data analytics that help you discover new ways to process, visualize and predict future needs
  • AI, machine and deep learning expertise, best practices, testing and tuning on a wide array of the latest technologies to optimize results
  • Visualization, modeling and simulation of complex data sets using a range of high powered visual computing solutions across multiple locations
  • Performance analysis, optimization and benchmarking to help you find the right technology for the right application and optimize application performance
  • System design, implementation and operation together with monitoring and I/O benchmarking to help avoid performance bottlenecks, decrease power and cooling needs, and address reliability and resilience issues


Advancing blockchain research at a CoE


For an example of the groundbreaking work being done at Dell EMC Centers of Excellence, look no further than the San Diego Supercomputer Center. The Center provides HPC computational resources, services and expertise to accelerate AI research and discovery in academia, industry and government. At this CoE, professionals from Dell Technologies are working with staff from SDSC, industry companies and academic partners to run a blockchain research lab called BlockLAB.

In this hands-on research lab, participants are developing strategies to explore and implement the principal technologies and business use cases for blockchains, distributed ledgers, digital transactions and smart contracts. Among other outcomes, this research is expected to yield a state-of-the-art, end-to-end solution based on a VMware® blockchain stack in a hybrid cloud environment that leverages Virtustream Enterprise Cloud.[1]

That’s the kind of leading-edge research that takes place every day at Dell EMC HPC and AI Centers of Excellence around the world — from North America and Europe to Africa, Asia and Australia.

Source: https://blog.dellemc.com/en-us/mining-gold-worldwide-centers-excellence/

Friday, April 12, 2019

Real-World Impact of Smart Surveillance to be Revealed at ISC West


With more than 200 billion connected devices forecasted to be in existence by 2020,[1] it’s no surprise to find that security, surveillance, and IoT solutions are evolving at a pace faster than the ability of most businesses to adapt. Whether due to rapid technological advancements, overly complex and inefficient systems, changing regulatory requirements, or government initiatives, it’s far too easy for organizations to be left behind. The result—failed or faulty surveillance—not only compromises the company or organization’s security investments but also threatens to jeopardize the safety and security of those they’ve committed to protect.

We’re excited about the momentum Dell Technologies is building to solve these real-world challenges for our surveillance customers, in alignment with our robust partner ecosystem. Just in the past year, we’ve more than doubled our investments in this critical industry as we work toward our vision to create a safer and smarter world from the edge to the data center to multi-cloud.

At ISC West, we’ll be showcasing our edge-core-cloud-enabled surveillance solutions at Booth #17115, including technical demonstrations and theater presentations on our new IoT Solution for Surveillance and IoT Connected Bundles. Built on the world’s leading cloud infrastructure, Dell Technologies designed the IoT Solution for Surveillance to transform and simplify how surveillance technology is delivered to help businesses improve security, better protect their people, and more quickly realize value from their investments. It’s an engineered, pre-integrated solution that combines validated workloads, hardware (cameras, sensors, etc.), and machine intelligence in a single, cohesive system.

Why do we believe so firmly in this foundation-to-roof approach? As Carrie MacGillivray from IDC pointed out, “Organizations are looking to integrated IoT solutions that bring together the storage, security, network, and management and orchestration. Companies need to find a partner that understands these requirements and can help provide the piece parts to build out a holistic solution. Dell Technologies’ holistic portfolio of key IoT solutions and go-to-market options make them a solid partner for your IoT journey.”

Difficulties with system integration, poor performance, and an inability to leverage back end analytics are resulting in failed surveillance implementations for many—not to mention, the underlying technology, from sensors to AI, continues to evolve at a blistering pace. Our goal is to create new solutions that not only simplify these complex environments, but also tailor the entire infrastructure to each business, readying it for what’s to come while reducing risk and improving efficiencies.

From cameras and computers to storage, servers, and the cloud—Dell Technologies has partnered with top names in technology, security, surveillance, software, and hardware to create the number one name in surveillance and IoT solutions, and the most integral and complete end-to-end infrastructure leveraging orchestration, automation, and virtualization.

Our commitment to this standard of essential infrastructure encapsulates every part of the implementation, including continuous support from the industry’s only full-time validation labs with locations across all major regions of the globe. This reach means that we are uniquely positioned, more so than any other company in the industry, to be more responsive to customer opportunities and partner support while scaling validations. These facilities are entirely focused on supporting, testing, validating, and documenting deployment of surveillance systems that reduce and minimize customer risk and liability, while increasing efficiency and performance at every step.

Our goal is to develop safety and security solutions that transform the industry by bridging security, IoT, and IT to help businesses and organizations around the world scale into the future. Here is an example of an organization that is actively leveraging our essential infrastructure:

“At the University of Southern Mississippi’s National Sports Security Laboratory, we are developing trusted practices for IoT and surveillance to support our forward-thinking 2025 initiative,” said Dr. Lou Marciani, director of the National Center for Spectator Sports Safety and Security (NCS4). “Our goal is to enhance safety for the millions of spectators who attend sports and entertainment events. The new IoT Solution for Surveillance is designed specifically to reduce the complexity of building, scaling, and managing these complex venues. We are excited to have the opportunity to test several case studies that might prove to be game changers for security in the future.”