Tuesday, August 6th, 11:00 am
Jeff Ohshima is a member of the technology executive team at Toshiba Memory, where he focuses on SSD development and applications engineering. He was previously VP Memory Technology Executive at Toshiba America Electronic Components, focused on flash memory with an emphasis on SSDs. He has also been Senior Manager R&D in the advanced NAND flash memory design department, responsible for 70 nm, 56 nm, 43 nm, and 32 nm part design. He has worked on memory at Toshiba for over 30 years, including 20 years on DRAM, where he served as a lead designer for application-specific memories and did technical marketing. Ohshima has served as a Visiting Research Scientist at Stanford University. He holds a BSEE and MSEE from Tokyo’s Keio University.
Jeremy Werner is senior vice president and general manager, SSD business unit, Toshiba Memory America, Inc., where he leads the team focused on defining, promoting, supporting and delivering solid state drives and Shared Accelerated Storage software that advance enterprise transformation, enable cloud infrastructure, and provide outstanding user experiences in PCs, embedded systems, automobiles, and consumer electronics. Jeremy has nearly 20 years of experience in the memory industry. He was previously VP Sales/Marketing at Tidal Systems, a developer of flash controllers acquired by Micron, and has also held marketing management positions at Seagate, LSI, and SandForce. He holds 23 patents in storage technology. He earned a BSEE at Cornell University.
The flash industry has recently produced game-changing innovations in density, latency, and form factors resulting in large cost-performance benefits. To address the wide spectrum of storage demands coming from phone/IoT devices, mobile compute, and data centers, new flash architectures are essential to handle next generation applications. Challenges have emerged in satisfying the demands of on-premise and cloud data centers while also addressing the data needs of consumers, mobile workforces, and organizations depending on high speed access to information assets. Future technology must include not only new architectures and more layers in flash chip designs, but also a roadmap for QLC flash and beyond, new classes of NVMe SSDs, and new software technologies. They must all come together to enable and accelerate the next wave of applications including real-time analytics, AI/ML, high-performance computing, IoT, and virtual and augmented reality.
Tuesday, August 6th, 11:40 am
Dr. Siva Sivaram is Executive Vice President of Silicon Technology and Manufacturing for Western Digital, responsible for the company’s industry-leading NAND flash memories and other memory and storage technologies. Sivaram has more than 35 years of experience in semiconductor technology and manufacturing. He has held executive positions at Intel, Matrix Semiconductor and at SanDisk after its acquisition of Matrix. Additionally, he was the founder and CEO of Twin Creeks Technologies, a solar panel and equipment company. Sivaram serves on the board of directors of the Global Semiconductor Alliance and the US-India Business Council. He has served as a board member of several start-up firms and was entrepreneur-in-residence at Crosslink Capital and XSeed Capital. Sivaram received his doctorate and master’s degrees in materials science from Rensselaer Polytechnic Institute, where he was elected to the Board of Trustees. Additionally, he is a Distinguished Alumnus of the National Institute of Technology, Tiruchi, India, where he received his bachelor’s degree in mechanical engineering. Sivaram has published numerous technical papers and a textbook on Chemical Vapor Deposition and holds several patents in semiconductor and solar technologies.
Christopher Bergey is Senior Vice President of Devices Product Marketing and Management at Western Digital responsible for the Data Center, Client Compute, Embedded and Mobility product offerings. Bergey leads Western Digital’s devices product portfolio, from definition and strategy to concept and customer acceptance. Previously, Bergey was Vice President for Embedded and Integrated Systems (EIS) at Western Digital, focused on developing and driving product strategies in mobile and compute, as well as new markets including automotive, connected home, smart city and industrial IoT market segments. In this role, he led industry adoption of TLC (three-bits-per-cell) into the mobile market. Prior to joining Western Digital in 2014, Bergey held multiple executive positions including VP of Marketing at Luxtera, VP of Mobile and Wireless at Broadcom, and Director of VLSI marketing at the Multilink Technology Corporation. Through this experience, Bergey has uniquely spent his career driving connectivity innovation in cloud datacenters as well as mobile and embedded markets. Bergey holds a Bachelor of Science degree in Electrical Engineering from Drexel University and a Master of Business Administration in Finance and Economics from the University of Maryland. He is a frequent author and speaker on topics such as IoT, mobility, and the implementation of data-centric memories and architectures to address the Zettabyte Age.
In order to meet the explosive growth of data, flash memory manufacturers must continue to advance technology. The introduction of the charge trap memory cell helped enable fast and high-endurance SLC, brought TLC into the mainstream and is now leading to the introduction of QLC. However, the real potential of QLC and other forms of high-density storage such as Shingled Magnetic Recording (SMR) hard disk drives is not realized in today’s data center. A new, data-centric architecture is needed to address the growing complexity of workloads, applications and AI/IoT datasets. It must involve multiple tiers of purpose-built compute and storage, as well as new approaches to system software. Zoned-block storage allows data to be intelligently placed and sequenced. This enables lower TCO and optimizes the use of emerging high-density storage, without sacrificing performance. In the Zettabyte Age, architects need to explore innovative approaches to data center architecture to unlock the benefits of the next generation of storage.
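The zoned-block model described above can be made concrete with a short sketch. This is a minimal illustration, assuming a simplified zone with a write pointer and whole-zone resets, in the spirit of NVMe Zoned Namespaces and host-managed SMR; the zone size and interface are invented for illustration, not any vendor's actual command set.

```python
# Toy model of zoned-block storage: each zone accepts only sequential
# writes at its write pointer, mirroring how ZNS SSDs and host-managed
# SMR drives expose high-density media to software. Illustrative only.

class Zone:
    def __init__(self, size_blocks):
        self.size = size_blocks
        self.write_pointer = 0          # next writable block in the zone
        self.data = []

    def append(self, block):
        """Writes must land at the write pointer -- no random overwrite."""
        if self.write_pointer >= self.size:
            raise IOError("zone full: reset (erase) before rewriting")
        self.data.append(block)
        self.write_pointer += 1

    def reset(self):
        """Whole-zone reset, analogous to a NAND block erase."""
        self.write_pointer = 0
        self.data = []

zone = Zone(size_blocks=4)
for b in ["log-0", "log-1", "log-2"]:
    zone.append(b)                      # sequential appends succeed
print(zone.write_pointer)
```

Because the host sequences data itself, the device avoids costly read-modify-write cycles, which is where the TCO and performance benefits of QLC and SMR come from.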
Tuesday, August 6th, 1:50pm
Hongsok Choi is VP of NAND Development and Design, SK hynix. Over the 26 years of his career, Mr. Choi has designed and developed various SRAM, DRAM and 2D & 3D NAND flash products. Currently, his responsibility focuses on developing the next generation of 3D NAND. Mr. Choi holds an M.S. in Electrical & Electronic Engineering from KAIST (Korea Advanced Institute of Science and Technology).
Andrew (Hyonil) Chong is Senior VP and Head of SK HMS Korea, where he is responsible for the development of ASIC, FW and hardware for Mobile (eMMC, UFS, MCP) and SSD NAND solution products. A 27-year veteran of the storage business including HDD, HBA, and SSD, Mr. Chong has worked on developing SoC controllers at renowned companies such as Cirrus Logic, LSI, and Marvell, as well as a startup. Mr. Chong holds a degree in Electrical Engineering and Computer Science from the University of California at Berkeley.
3D NAND has been very successful, but further advances are essential to meet the demands of cloud providers and new technologies such as 5G, AI, and IoT. One promising approach is to use charge trap flash (CTF) and periphery under cell (PUC) technologies to create a fourth dimension. This approach allows for significantly higher densities, thus reducing cost and maximizing value. In particular, it leads to a seamless transition to the next generation of flash technology that can replace conventional storage in many applications. A 4D NAND solution, in combination with next-generation high-performance interfaces, leads to major improvements in capacity, performance and reliability. Additionally, it will considerably accelerate the adoption of NAND-based storage by offering new cost and performance levels.
Tuesday, August 6th, 2:20pm
Vladimir Alves is co-founder/CTO at NGD Systems, where he focuses on developing SoCs that implement intelligent storage technology for data centers and fog computing. He has developed SoC-based solid state solutions for over 10 years, during which his teams have released many enterprise SSD controllers. Before co-founding NGD Systems, he was Sr Director of SSD SoC Development at Western Digital and STEC. He is the author of over 30 scientific publications on subjects including SoCs and computer architecture, and the co-author of over 20 patents. He earned his PhD in microelectronics from the National Polytechnic Institute of Grenoble, France.
Scott Shadley is VP Marketing at NGD Systems where he leads marketing, product management, and product development for the company’s industry-leading computational storage. He has been a key figure in promoting computational storage, being co-chair of the SNIA Technical Working Group on the subject which he helped found, and speaking on the subject at Open Compute Summit, Flash Memory Summit, NVMe Developer Days, and many other events, press interviews, blogs, and webinars. Before joining NGD Systems, Scott managed the Product Marketing team at Micron, was the Business Line Manager for the SATA SSD portfolio, and was the Principal Technologist for the SSD and emerging memory portfolio. He launched four successful innovative SSDs for Micron and two for STEC, all of which were multimillion dollar sellers. Scott earned a BSEE in Device Physics from Boise State University and an MBA in marketing from University of Phoenix.
Everyone wants to get the greatest possible value from the enormous amounts of data they are now collecting. Not only does this require tremendous computing power, but it also requires easily scalable solutions that can handle even more data in the future. Computational storage is a key part of the answer. It brings compute to the edge where the data is stored. This both avoids the need to move large amounts of data around and reduces the strain on facilities such as central processors, networks, and other systems and devices. And scalability becomes simple! All you do is add more storage units (which you obviously need anyway), and the compute power increases automatically. A use case involving in-storage machine learning shows particularly promising results. It makes the training stage orders of magnitude faster than established learning models. Computational storage can thus meet the challenges of data analytics, AI/ML, and high-performance computing in edge, cloud, and hyperscale datacenters.
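A minimal sketch of the data-movement argument above, with a record format and predicate invented for illustration: when the filter runs where the data lives, the host receives only the matching records instead of every record.

```python
# Toy illustration of computational storage: run a filter inside the
# "drive" and ship back only the small result, rather than moving every
# record to the host first. Records and predicate are invented.

records = [{"id": i, "temp": 20 + (i * 7) % 50} for i in range(1_000)]

# Conventional path: move all records across the bus, filter on the host.
moved_conventional = len(records)

# Computational-storage path: the drive-side processor applies the
# predicate locally and returns matches only.
def in_storage_filter(data, predicate):
    return [r for r in data if predicate(r)]

hot = in_storage_filter(records, lambda r: r["temp"] > 60)
moved_computational = len(hot)

print(moved_conventional, moved_computational)
```

The same principle underlies the in-storage machine learning use case: training data stays on the device, and only model updates or results cross the interconnect.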
Tuesday, August 6th, 3:00 pm
Steven Eliuk is Vice President Deep Learning, Global Chief Data Office (GCDO) at IBM, where he leads the development of platform and infrastructure components for machine learning and deep learning in the Cognitive Enterprise Data Platform, IBM’s data lake. He also is focused on applying DL in the enterprise to accelerate the use of cognition in internal processes while maintaining governance, security, privacy, and trust. His work has both generated revenue and showcased cognition at IBM scale to clients. Before joining IBM, Steven led the design of high performance computing (HPC) infrastructure for artificial intelligence and launched the first model parallel distributed training framework for HPC at Samsung Research America. Steven earned a PhD in Computer Science from the University of Alberta. He has presented at many events, including IBM Think, IBM CDO Summit, NVIDIA GPU Developers Conference, IEEE, and more.
Improving the use of data is a top priority of business organizations around the world. A key aspect is to develop an enterprise information architecture that provides an effective AI solution. Data must be accessible, trusted, and ready to be analyzed by AI algorithms. Data of every type, regardless of where it lives, needs to be part of the AI journey. We must provide an AI-friendly infrastructure that includes hybrid cloud, multi-cloud, virtualized, and containerized environments. Machine learning can then be applied to the results for a competitive advantage.
Data management challenges ensue with the ingestion, aggregation, and siloing of data. The problems are compounded by multiple copies and potentially stale data that can detract from the AI environment. Architectural challenges can affect security, protection, and global accessibility to data assets.
The challenges can be met by classifying and preparing data sets, implementing AI data workflows, and deploying AI models capable of meeting service level objectives. See how real-world customers are executing AI transformation, applying best practices to leverage flash memory for AI performance optimization, and employing innovative techniques to increase business value.
Wednesday, August 7th, 11:00 am
Alper Ilkbahar is General Manager of Data Center Memory and Storage Solutions and Vice President Data Center Group at Intel, where he led the introduction of the world’s first persistent memory to market in 2018. Before taking on this position, he oversaw the commercialization of new non-volatile, high-performance memory technology at Matrix Semiconductor and SanDisk, as well as holding design engineering and management roles in Intel’s microprocessor division. A 27-year veteran of the semiconductor industry, he earned an MSEE from the University of Michigan and an MBA from the Wharton School of the University of Pennsylvania. He holds over 50 patents in semiconductor process, device design, and testing and has published multiple conference and journal papers.
New and increasingly important data-centric workloads, such as real-time analytics, AI/ML, VR/AR, IoT, HPC, and cybersecurity, demand tremendous throughput at a reasonable price. The current memory/storage hierarchy of DRAM and flash cannot do the job alone. A new tier is needed that is persistent, high-throughput/low-latency, production-proven, low-cost, scalable, and simple to integrate into existing designs. Such a tier can provide a tremendous speed boost at an affordable cost. Use cases are already available for a wide variety of key applications, and results show major advances in speed, cost/performance ratio, power consumption, and scalability.
Wednesday, August 7th, 11:30 am
Andrew Dieckmann is vice president of marketing and applications engineering for the data center solutions (DCS) division at Microchip Technology's Microsemi subsidiary. He is responsible for product management, product marketing, product strategy and the application engineering teams supporting Microsemi's broad portfolio of storage solutions, including SSD controllers, memory controllers, RAID solutions, HBAs, PCIe switches and SAS expanders. Prior to Microchip, Mr. Dieckmann led marketing for Microsemi’s data center business and helped build PMC-Sierra’s enterprise storage business from inception to an industry leadership position. He earned an electrical engineering degree from Lakehead University in Ontario, Canada.
Data centers have the formidable task of improving operating efficiency and maximizing their IT investments in hardware infrastructure in the face of evolving and varied application requirements. New architectures are needed to better utilize and optimize hardware assets spanning storage, compute and memory. Enabling resource agility where physical compute, storage and memory resources are treated as composable building blocks is a key to unlocking efficiencies and eliminating stranded and underutilized assets. We will explore the innovation that is possible with a flexible infrastructure for storage, compute and memory, examine primary barriers to adoption and highlight technology areas that both the industry and vendors like us are enabling to meet the needs of a composable platform.
Wednesday, August 7th, 1:00pm
Chris leads software engineering for the CUDA platform, DGX systems, data-center distributed systems such as Kubernetes, and NGC, a registry for accelerated solutions on NVIDIA platforms; this software spans supercomputers, clouds, workstations, robots, and self-driving cars. Over his career he has worked in diverse areas, from many-core computer architecture, compiler and performance tools engineering, embedded systems, microwave communication, and networking to distributed systems. Chris has a BS in Computer Engineering from the University of Illinois at Urbana-Champaign.
Michael Kagan is a co-founder and CTO of Mellanox Technologies where he focuses on using high-speed networking to improve application performance. He works on problems in high-performance computing, cloud computing, and megawebsites. He has been a leader in establishing new standards for high-speed networking, with a particular emphasis on RDMA over Converged Ethernet (RoCE) and the Infiniband technology. Before joining Mellanox, Mr. Kagan worked at Intel where he managed the Pentium MMX design and the PC product group. He holds a BSEE from the Technion — Israel Institute of Technology.
Today, GPUs (Graphics Processing Units) are driving the mathematically intensive artificial intelligence (AI) and machine learning (ML) applications that promise to revolutionize almost every aspect of our world. They are also data hungry and fast, which means they can consume terabytes of data very quickly, often more than can be stored locally. Interfacing GPUs to ultra-high-performance InfiniBand and Ethernet storage networks solves the local storage capacity limitation, especially when the networked storage is flash based. Join NVIDIA and Mellanox, the leaders in GPUs and high-performance networking, to understand how these two amazing technologies can be combined to turbocharge the AI and ML revolution.
Wednesday, August 7th, 1:30 pm
Mr. Thad Omura is an accomplished technology business executive who has been instrumental in driving startups to IPO and high-value acquisitions, while also having public company experience as a corporate executive. Since April 2015, Thad has served as EVP of Marketing and Operations for ScaleFlux, responsible for all business and marketing activities and for expanding the growth opportunities for Computational Storage. Prior to ScaleFlux, Thad was VP of Product & Customer Management for Seagate Technology’s Flash and SSD business. He stepped into that role through Seagate’s acquisition of the LSI Flash products in 2014, including SandForce Flash Processors and PCIe SSDs. Thad was an early executive at SandForce in 2008 as VP of Marketing, and remained with the organization through the LSI and Seagate acquisitions. Prior to SandForce, Thad was with Mellanox Technologies as VP of Product Marketing, and helped drive the company to an IPO in 2007 based on the success of its industry-leading networking solutions. Prior to Mellanox, Thad served in various marketing and sales roles at Motorola SPS, Marvell, Galileo Technology, and Quality Semiconductor. Thad holds a BS degree in EECS from UC Berkeley.
Data-driven applications such as databases, analytics, AI/ML, VR, and IoT are everywhere today – and gobbling up more data all the time. How can we get them the processing power they need today and the even larger amounts necessary tomorrow? Computational storage is the answer! It brings compute to the data, thus distributing the workload rather than straining central resources. After all, more compute comes along for the ride when you add more storage to handle larger data stores. Case studies show promising results for popular storage engines such as RocksDB and common storage functions such as compression/decompression. Now is the time for IT managers to deploy computational storage to provide the solutions they need in clouds, data centers, content distribution networks, robotics, autonomous vehicles, and high-performance computing.
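The compression/decompression function mentioned above can be sketched with a toy drive model that compresses on write and decompresses on read, so the host spends no CPU on it. The class and its interface are invented for illustration; this is not ScaleFlux's actual API.

```python
import zlib

class ToyComputationalDrive:
    """Illustrative only: a 'drive' that transparently compresses on
    write and decompresses on read, offloading that work from the host."""
    def __init__(self):
        self._blocks = {}

    def write(self, lba, data):
        # Drive-side engine compresses before the data hits the media.
        self._blocks[lba] = zlib.compress(data)

    def read(self, lba):
        # Decompression also happens drive-side; the host sees raw data.
        return zlib.decompress(self._blocks[lba])

drive = ToyComputationalDrive()
page = b"key=value;" * 1000            # highly compressible sample data
drive.write(0, page)
stored = len(drive._blocks[0])
print(len(page), stored)               # the stored copy is far smaller
```

For a storage engine like RocksDB, moving this work into the drive frees host CPU cycles for query processing while also stretching effective flash capacity.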
Wednesday, August 7th, 2:10 pm
Nigel Alvares is Vice President of Marketing for the Flash Business Unit at Marvell Semiconductor. He is responsible for defining and driving new product solutions spanning edge to cloud data center segments. Before joining Marvell, Nigel worked for Inphi, a high-performance mixed-signal semiconductor innovator, where he helped launch innovative non-volatile memory chipsets for emerging NVDIMMs. Prior to Inphi, Nigel was a founding member of PMC-Sierra’s Enterprise Storage Division and helped build it into the company’s largest business, managing its widely deployed storage controller and disk interconnect products. He has over 20 years of experience in data storage and networking. He earned a BSEE from McGill University (Montreal, Canada) and an MBA from Simon Fraser University (Vancouver, Canada).
As billions of devices come online and begin generating zettabytes of data, new architectures leveraging emerging technologies are essential to handle the onslaught. Chipset innovations play a critical enabling role, powering high-performance edge and local processors, ultra-high-speed Ethernet devices and switches, low-power controller and device solutions, and new 5G wireless systems. New architectures are needed to enable local and distributed processing at all levels. They must bring new levels of scalability that are easy to achieve, integrate machine learning, and provide throughput sufficient for the demands of real-time analysis and high-performance computing. To meet these requirements, it is crucial that chipset architectures are optimized for power consumption and cost while also supporting low latency. Collectively, these requisites result in a tall order, but one that chip makers are addressing with innovations that will accelerate the growing data economy.
Wednesday, August 7th, 2:40 pm
Salil Raje heads the Data Center Group (DCG) for Xilinx, leading a global team of engineering, sales, and marketing professionals dedicated to the data center, the fastest growing market for FPGAs. His group helps top hyperscalers and enterprise cloud providers harness intelligent, adaptable infrastructure to improve performance, power efficiency, and operating costs. He has previously led initiatives in development environments, contributed greatly to expansion efforts in machine learning and vision applications, and improved FPGA design tools. He has over 20 years of experience in the technology industry, holds eight patents in electronic design tools, ASIC, and FPGA designs, and has written more than 15 industry-recognized research papers. He earned a PhD in computer science from Northwestern University.
With the advent of flash storage and persistent memory, storage is no longer a roadblock to better system performance. Data can already be transferred from SSDs at thousands of times the speed of hard drives. Now NVMe and new technologies such as 3D XPoint are putting further pressure on other system resources with ever-higher data rates. How can processors and networks keep up? With traditional system design – they can’t. Heterogeneous architectures and computational storage are becoming the answer to this challenge, with FPGAs leading the charge. This long-proven technology can offload protocol overhead, accelerate storage services such as compression and security, and perform local processing for computational storage. FPGAs are fast, flexible, and capable of handling a wide variety of algorithms and procedures. They can meet today’s challenges and support tomorrow’s emerging applications such as AI/ML, real-time analytics, video and image processing, cybersecurity, and 5G wireless.
Thursday, August 8th, 11:00 am
Jihyo Lee is the co-founder and CEO of FADU Technology. Jihyo is a former partner at Bain & Company and a successful serial entrepreneur involved in multiple businesses in technology, telecom and energy. He initiated and successfully led Bain & Company’s technology sector focus, including semiconductor and display panel components, devices (mobile and TV) and services (cloud and internet/contents). Jihyo has been a C-level advisor to global technology companies, leading projects to solve key strategic issues. As CEO of FADU, he has established FADU as a fabless semiconductor innovator, uniting exceptional industry talent to create a revolution in data center and storage for next generation computing architectures.
Emerging Enterprise applications need extremely high-performance storage and the lowest possible power. To meet the challenges, storage designers must move away from legacy architectures left over from hard drive-based systems. They must look to new form factors, new approaches to NAND management, new ways to offload processors and reduce DRAM caches, new acceleration techniques, and new support for virtualization and end-to-end security. They must utilize the latest NVMe features, scalable systems, SSD architectures (including open-channel), and security standards. The end result for data centers will be much higher storage capacity and performance combined with lower power consumption and higher resistance to security threats.
FADU is a fabless semiconductor company focusing on advanced Flash-based memory storage solutions and systems for next generation computing. FADU is answering the industry’s call to enable future end applications that are computation and storage intensive. The innovative Flash solution designs are independent of legacy storage architectures and include the Annapurna PCIe 3.0 x4 NVMe SSD controller and Bravo M.2 and Dual U.2 Enterprise SSDs. FADU is setting the benchmark for a low power, high performance and feature rich future.
Thursday, August 8th, 11:40am
Dr. Zining Wu is the co-founder and CEO of InnoGrit Corporation, a fabless semiconductor startup focusing on data storage, management, and processing.
Before founding Innogrit in October 2016, Dr. Zining Wu worked at Marvell for 17 years, where he last served as Chief Technology Officer of Marvell Technology Group. In this role, he was responsible for overseeing all technical aspects of the company, including establishing the company's technical vision, directing strategic initiatives for future growth, and managing central engineering for R&D execution. Prior to his CTO role, Dr. Wu was the Vice President of Data Storage Technologies at Marvell, where he was responsible for innovative storage technologies for the hard disk drive and SSD controllers.
Dr. Wu holds a Bachelor of Science degree in Electronic Engineering from Tsinghua University in Beijing, China, and a Master of Science Degree and Ph.D. in Electrical Engineering from Stanford University.
As the SSD industry migrates from PCIe Gen3 to Gen4 and even to Gen5, can users really enjoy the doubling of throughput and IOPS? In this presentation, we analyze system-level bottlenecks and discuss how to address them. For PC users, we show that PCIe Gen4 combined with host managed FTL can greatly boost system performance and reduce power. For data centers, tiered storage with fast NAND and hardware offload engines can help unleash the true potential of PCIe Gen4 and produce a high-performance storage system for demanding data center applications.
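The generation-over-generation doubling can be checked with back-of-the-envelope arithmetic from the per-lane signaling rates and the 128b/130b line encoding that PCIe Gen3 and later use. These are link-level upper bounds; delivered throughput is lower once NVMe protocol overhead (TLP headers, flow control) is counted, which is exactly where the system-level bottlenecks discussed above come in.

```python
# Upper-bound throughput for an x4 PCIe link by generation, from the
# raw per-lane signaling rate and 128b/130b encoding. Protocol overhead
# reduces the figure a host actually observes.

GT_PER_LANE = {"gen3": 8, "gen4": 16, "gen5": 32}   # gigatransfers/s

def x4_bandwidth_gbps(gen):
    """Usable GB/s for an x4 link after 128b/130b encoding."""
    return GT_PER_LANE[gen] * (128 / 130) / 8 * 4

for gen in GT_PER_LANE:
    print(gen, round(x4_bandwidth_gbps(gen), 2))
```

An x4 Gen3 link tops out near 3.9 GB/s, Gen4 near 7.9 GB/s, and Gen5 near 15.8 GB/s, so the controller, FTL, and NAND must all scale to keep the link busy.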
Thursday, August 8th, 12:10 pm – 1:00 pm (note extended keynote time)
Location: Mission City Ballroom
Jim Handy, Chief Analyst, Objective Analysis
Rob Peglar, President, Advanced Computation and Storage
Dave Eggleston, Principal, In-Cog
The last year has been marked by a boatload of surprises: a price collapse for flash and SSDs, reduced demand, and general oversupply, with companies reporting lower revenues and lower earnings. Meanwhile, in the background, we have trade wars and key companies (and countries) being blacklisted and having their executives arrested. What is next? Our fearless panelists will give their opinions on the unpredictable future as American elections draw near, trade and shooting wars loom, Brexit continues to elude completion, and no one feels safe despite a strong economic outlook. Bring your thoughts and opinions to this no-holds-barred example of my favorite company name - "Precision Guesswork"!