Sign in to view Balaji’s full profile
San Francisco Bay Area
Contact Info
3K followers
500+ connections
About
Experience & Education
- RISC-V International
  ****-* ***** ** ********
- ******* ***** *******
  *******, *********, *** *** ********
- ******** *************
  ********
People also viewed
- Chitu Singh, San Diego, CA
- Rajeev Chandrasekhar, New Delhi
- Harshad Mehta, President at Silicon power Corp., Malvern, PA
- Kumar Sankaran, San Jose, CA
- Travis Lanier, San Diego, CA
- Jim Keller, Palo Alto, CA
- Sanjay Mehrotra, Los Altos, CA
- Narayanan Kaniyur, Los Altos, CA
- Santhosh Kumar, Bengaluru
- Sunita Verma, Group Coordinator (R&D) at Ministry of Electronics & IT, Delhi, India
Explore more posts
-
Matt Rappaport
New Tesla Patent Reveals Clues to an Exciting Future of Humanoid Robots and AI Integration

In an interesting discussion, Herbert Ong and Scott Walter dive deep into Tesla's recently published patent application WO2024072984A1 (link in comments). Titled "Actuator and actuator design methodology," the patent primarily focuses on controlling robot movement through strategic actuator placement. However, the document also unveils important clues about the potential for end-to-end neural net training of humanoid robots, reminiscent of Tesla's approach to training its vehicles with FSD.

Ong and Walter explore the concept of personalized humanoid skills. Imagine purchasing a bot with built-in capabilities, then having the option to augment and customize those skills through a "Skills Store" akin to Apple's App Store. This opens up a world of possibilities for tailoring robots to specific tasks and preferences.

The discussion also touches on the potential for various robot form factors, each designed to excel at particular skills. Picture a tennis-playing robot that comes pre-trained with basic volleying instructions but requires a "Skills Upgrade" to perform at a master level. This raises questions about the need to manufacture and train a diverse range of robot body types and sizes, mirroring how humans possess ideal physiological characteristics for different activities.

Later, Ong and Walter discuss a scenario in which Tesla builds the hardware (including automobiles, humanoids, and other bot form factors) while xAI develops the brains. This arrangement opens up opportunities for licensing agreements between the two companies that may help them navigate future antitrust concerns as these technologies are deployed at scale. The convergence of advanced robotics and artificial intelligence holds the potential to catalyze a profound societal transformation.
As Tesla and xAI continue to push the boundaries of what's possible, I can envision a future where intelligent, adaptable robots become integral to various aspects of our lives – from manufacturing and transportation to healthcare, recreation, and beyond. The seamless integration of cutting-edge hardware and software will revolutionize the way we live and work, ushering in an era of unprecedented efficiency, productivity, and innovation. While there are still challenges to overcome and ethical considerations to address, the prospect of these combined technologies driving positive change on a global scale is awe-inspiring. #FrontierFrontier #Humanoids #AI #Robots
-
Pierce R. Neinken
📈 AI Startup Sierra Expands HQ in San Francisco (4K to 41K SF) 🌁 Artificial intelligence firm Sierra is making a bold move by relocating to a larger office within San Francisco. The startup has signed a lease for a 41,104-square-foot space at 235 Second St., significantly upsizing from its current 4,000-square-foot office. 🏢 This move to San Francisco's South of Market neighborhood, where office vacancy rates have started to decline for the first time since 2020, is a positive sign for the city's hard-hit office market. The area's office vacancy rate, though still high at 28.8%, is slightly lower than six months ago. 📉 The deal contributes to the increase in leasing activity in San Francisco, which is up by about 1 million square feet compared to last year. This uptick is driven by the region's reputation as a leading national hub for AI companies. 💼 Sierra joins other AI firms expanding in the South of Market area, including Scale AI, which signed the city's largest office lease of 2024 with a 178,000-square-foot deal. So far this year, office tenants have signed leases totaling 750,000 square feet in South of Market, a significant increase from last year's 370,000 square feet. 🚀 Co-founded in 2023 by former Salesforce CEO Bret Taylor and former Google executive Clay Bavor, Sierra raised $85 million earlier this year to develop a customer support AI tool for businesses. The firm is set to relocate from its current space at 215 Second St. by next month. San Francisco's reputation as an AI hub, bolstered by its talent pool and six major AI research institutions, continues to attract companies. Recent significant deals include OpenAI’s sublease of over 400,000 square feet of Uber’s former headquarters and Anthropic’s takeover of Slack's former headquarters. These developments reflect a resilient and adaptive real estate market. As a commercial real estate professional, I’m excited to see how these trends shape the future of office space in our vibrant city.
-
Matt Rappaport
I've been thinking a lot about how Tesla's vehicle infrastructure can leverage inference compute, distributing computational load and reducing data center energy usage. Dr. John Gibb does a nice job of summarizing these concepts in his video (starting around 8:10).

Inference Arbitrage: A Game Changer for AI Compute
As AI continues to evolve, the demand for computational power is skyrocketing. Giants like OpenAI, Meta, and Google are pouring billions into training and inference compute, facing looming shortages of transformers and electricity. Here's where inference arbitrage, a novel concept leveraging Tesla's infrastructure, comes into play.

The Challenge: Soaring AI Compute Costs
Leading AI companies are investing billions to power their AI programs. However, the energy and computational needs are outpacing infrastructure capabilities. This bottleneck isn't just financial; it's physical. The growing demand for AI services could soon exceed available power and hardware.

The Solution: Inference on the Edge
xAI, in collaboration with Tesla, offers a groundbreaking solution: using Tesla vehicles for AI inference tasks. Tesla's fleet, equipped with powerful hardware and large batteries, can perform AI inference during idle times (about 90% of the time), reducing strain on central data centers.

How Inference Arbitrage Works
Distributed Computing: Instead of relying solely on massive, centralized compute farms, inference requests (e.g., generating an email or image) are processed by idle Tesla vehicles. This distributed approach taps into the existing infrastructure efficiently.
Energy Optimization: Tesla vehicles can perform inference tasks using their batteries during peak electricity demand times, avoiding high energy costs. At off-peak times, they recharge at lower costs, ensuring energy-efficient operations.
Commercial Benefits: This model allows Tesla owners to earn money by utilizing their vehicle's idle time for AI computations. Tesla benefits from additional revenue streams with minimal overhead, and xAI gains a scalable, cost-effective solution for AI service delivery.

Impact on the AI Industry
By integrating AI inference with Tesla's distributed compute network, xAI can deliver services faster and more economically than competitors. This innovative use of existing resources not only addresses the infrastructure bottleneck but also offers a sustainable path forward for AI growth.

Conclusion
Inference arbitrage provides a strategic edge for xAI and Tesla, combining AI innovation with smart energy use. As the demand for AI compute continues to rise, this approach could redefine the landscape, positioning xAI and Tesla as leaders in efficient, scalable AI service delivery. Stay tuned as this exciting development unfolds and reshapes the future of AI. #ElectricVehicles #InferenceCompute #DataCenters #Electrification #AI #FrontierTechnology #FutureFrontier
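The dispatch idea in the post can be sketched as a toy scheduler. This is purely illustrative: the fleet model, one-job-per-vehicle capacity, and the `dispatch` function are my assumptions, not anything Tesla or xAI has published.

```python
def dispatch(num_requests, fleet, datacenter_capacity):
    """Toy scheduler: route inference jobs to idle vehicles first,
    overflow to the central data center, and report any excess.

    fleet: list of dicts with an 'idle' flag; each idle vehicle
    is assumed to handle one job at a time.
    """
    idle_vehicles = sum(1 for v in fleet if v["idle"])
    on_fleet = min(num_requests, idle_vehicles)
    remaining = num_requests - on_fleet
    on_datacenter = min(remaining, datacenter_capacity)
    dropped = remaining - on_datacenter
    return on_fleet, on_datacenter, dropped

# The post cites ~90% idle time: in a 10-vehicle fleet, ~9 are available.
fleet = [{"idle": True}] * 9 + [{"idle": False}]
print(dispatch(12, fleet, datacenter_capacity=2))  # (9, 2, 1)
```

The point of the sketch is only that edge capacity absorbs load before the data center does; a real system would also have to weigh latency, battery wear, and electricity prices.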
-
Roland Manger
"Despite some initial skepticism, Codasip's tools-first business model has proven to have legs, and, arguably, the company is more attractive today than other RISC-V IP rivals like SiFive or Andes. That's good timing, as the RISC-V IP market is due some M&A activity, we think. The ability to customize within boundaries should both de-risk and speed up core development," concludes computing analyst David Harold of Jon Peddie Research. Also noteworthy: based on a few distinct "template" processor cores, strategically positioned inside the total application space, existing processor IP can be customized in an automated way to create optimally co-designed software and hardware systems for almost any application. As David alludes to in his comment regarding higher-end cores, Codasip will cover the application space step by step without having to put a large number of inflexible cores in place and without charging an arm and a leg.
-
David Hauser
NVIDIA's market cap surpassed $2.8 trillion. Yet Jensen Huang never completed his business plan. As he says, business plans are easy to write. Anyone can write them. I have been stressing this lately: business plans and pitch decks are just tools for guiding strategy and communicating vision. In fact, startups deviate a lot from their early business plans. How do you secure investments then? At early stages, the team gets funded, not the startup. Investors look for specific qualities in the founding team. Some qualities that matter:
> Background and Expertise: Proven track records and relevant experience are crucial. Demonstrate your skills and accomplishments.
> Vision: Investors need to understand and believe in your idea.
> Passion and Perseverance: Investors want to see that you will push through challenges.
> Ability to Adapt: The startup landscape is dynamic. Flexibility and quick learning are important traits.
That's how the CEO of NVIDIA convinced investors - through his reputation. Startup building starts way before launch. Focus on what makes the most impact first. #entrepreneurship #venturecapital #startup #mergers #acquisitions Serial Entrepreneur & Investor Helping Startups Become Unstoppable – David Hauser
-
KEVIN K.
NVIDIA Isaac AMR End-to-End Autonomy Platform for Next Generation AMRs. Isaac AMR is an autonomy platform to simulate, validate, deploy, optimize and manage fleets of autonomous mobile robots for safely operating in large, highly dynamic, unstructured environments. It consists of edge-to-cloud software services and compute hardware combined with a set of reference sensors and robot hardware to accelerate development and deployment of AMRs, saving OEMs several years and millions of dollars. Contact KENNEDY ROBOTICS AI to learn more about NVIDIA ISAAC AMR. KENNEDY ROBOTICS AI kennedyrobotics.ai #KENNEDYROBOTICSAI #NVIDIAISAACAMR #NVIDIAISAAC #AMR #ISAACAMR
-
Navin Chaddha
Today's news of Frore Systems's $80 million Series C, led by Fidelity Investments, and the ability of its solid state, active cooling Airjet semiconductors which can now cool up to 100 TOPS of AI workload running on NVIDIA Jetson Orin platforms in thin, silent, dust proof and vibration free form factors, is a major milestone. This achievement was powered by three factors: - Founders who focus on their vision (congrats Seshu Madhavapeddy, Suryaprakash Ganti and Frore Systems team); - A line of differentiated deeptech products (GPU-adjacent semiconductors which replace the 50 year technology of the fan); - The critical role of heat dissipation in making Edge AI and Datacenter AI a reality. We partnered with the Frore team at the ideation stage, welcome them as a great example of an AI Cognition-as-a-Service (CaaS) company, and are excited for this latest milestone in their journey. Link to story in comments.
-
David Nicholson
Here is an example of what's happening in the world of AI that calls into question the assertion by some that NVIDIA has a "monopoly on AI". Never underestimate the ability of markets to seek out margin bloat, no matter how well deserved it may be, and nibble away. Return On Investment considerations will bring rationality to this game. They always do. It is only a question of time.
-
Jeffrey Cooper
Cerebras Systems - Pioneering Wafer-Scale AI Computing (2 min read) Cerebras Systems, based in Sunnyvale, CA, is pioneering a new class of computer systems to accelerate AI by orders of magnitude. The company was founded in 2015 and is led by CEO Andrew Feldman, a seasoned Silicon Valley entrepreneur who previously co-founded and led SeaMicro (acquired by AMD for $357 million) and had roles at companies like Force10 Networks and Riverstone Networks. At the core of Cerebras' solution is the Wafer Scale Engine (WSE), the largest chip ever built with 2.6 trillion transistors. The WSE powers systems like the CS-2, enabling breakthroughs across industries. Major customers include TotalEnergies in energy, GlaxoSmithKline, and AstraZeneca in pharma, the Mayo Clinic for medical AI, and national labs like Argonne and Lawrence Livermore. Cerebras has around 335 employees but a major industry impact, with customer commitments approaching $1 billion. With over 400,000 AI cores, massive memory bandwidth, and ultra-fast interconnects, the CS-2 delivers cluster-scale performance in a single system. This allows for cutting training times from months to minutes on complex workloads like seismic modeling, drug discovery, and COVID research. Key verticals are pharmaceuticals/healthcare, energy/industrial, scientific research, and AI/ML. Cerebras removes current hardware constraints, allowing researchers to explore novel architectures and drive previously impossible innovations. As AI complexity grows, wafer-scale computing solutions like those developed by Cerebras Systems may be increasingly important in accelerating AI workloads and enabling new innovations across various industries. Credit: Jeffrey Cooper & Perplexity.ai For more on AI, robots, and Semicon, check out my blog: https://lnkd.in/eWESid86
-
Ruth Foxe Blader
Happy to join my buddies at the BBC to talk about NVIDIA on BBC Radio 4 this morning! (hour 1:18) https://lnkd.in/eb6JkdJn What's my 🔥 take? ✋ On one hand, #artificialintelligence is the next platform shift, and Nvidia is an important provider of critical infrastructure for #AI. 🤚 But, when Nvidia eclipsed Microsoft and Apple last week as the most valuable company in the US, analysts started scratching their heads. Is Nvidia over-bought? ⏰ Timing is everything. There is no doubt that AI is important. But there are questions as to when it becomes broadly institutionalized. 🤑 Retail investors want access to big private AI companies! While they wait, they might just buy the Nvidia dip! ❓ NOBODY KNOWS!
-
Benjamin Wolkon
"Advanced computing is starting to serve utilities..." As the rise of AI is contributing to a near-doubling of five-year energy load forecasts (from 2.6% to 4.7% growth), AI is also at the core of some of the innovation to solve the biggest problems in climate and energy. A couple of weeks ago I was in a room of investors who were asked if it's "too early" to be investing at the nexus of AI and climate. I was surprised to even hear the question, because we've been doing it for years. This article from Utility Dive highlights three companies in which MUUS Climate Partners was an early investor: - BrightNight, which created an advanced simulation tool to optimize the design and operations of clean energy projects; - Amperon, which has built the world's most accurate energy demand forecasting system; - Utilidata, which has partnered with NVIDIA to deliver the first distribution system AI platform. If you're interested in the AI-climate nexus (and not just the buzzwords, but the actual solutions being built and deployed), this article might be of interest. https://lnkd.in/evkkY4nX
-
Jessie Chuang
#Semiconductor companies related to #AI infrastructure are hot, in both the public market and the private early-stage market. At-scale AI applications require new infrastructure, which leads to opportunities at the silicon and hardware level. Thus far this year, VC-backed chip startups have raised nearly $5.3 billion across just 175 deals, per Crunchbase data. https://lnkd.in/gC2FZwC6 Previous articles from us already cover the reasons behind these trends: The WHY behind the Impressive IPO of an AI Infra Startup 👉 AI Compute's Bottleneck Lies in #Connectivity 👉 Advanced Packaging and Si #Photonics https://lnkd.in/eN35P5YY Next AI Infrastructures for the New AI Decade 👉 Power Hunger Issue https://lnkd.in/gdcbB6S8 AI Infrastructure Hardware and Software Accrue the Most Value in the AI Stack 👉 Big funding from VCs or tech titans going into the software infrastructure of the generative AI stack in the past year only increased the surging demand for AI servers and GPUs – hardware is more of a bottleneck than software now. https://lnkd.in/gYK7aaBP
-
Glenn Stuart Harris
Many articles & stories have been written about Nvidia in recent months, & rightfully so, as the “Darling of a Company” has been very busy in the eyes of Wall Street, Tech, Silicon Valley, Semiconductors, VC’s/Investors, Shareholders, & the General Public as a whole. It had a 10-1 Stock split in June ’24 & it broke through the threshold as the “Most Valuable Company in The World 🌎” with a $3.335 Trillion Valuation, surpassing the likes of Apple🍎AND Microsoft for such an honorable yet intensely sought-after market position. A “REMARKABLE” doing!…Having some typical ups & downs in the market, not unusual or uncommon at this extreme “pinnacle”/level reached, it remains near the $3.3 Trillion Valuation. I have refrained from posting during this tumultuous period of leveling rise/“dips” only to make a few points after some steadiness of this position was reached & maintained.💵—-~-~~💵.~Nvidia, an Early Sequoia Capital backed Co. dating back nearly 3 decades to the 90’s beginnings, has far become = to about 3.35k Unicorn🦄 CO’s & may become = to over 6k Unicorn🦄 CO’s Valuation wise, but No One can predict The Future or The Markets, yet the “Demand & Taste for Nvidia Chips🤖seems Strong 💪 & it could equate to Lays’ Potato🥔Chips~You Can’t Want/Need/(Eat) Just One~& NO DIPS”~🙏📉💵
-
Dean Jones
Nvidia's financial performance is on an upward trajectory, showcasing its dominance in the AI chip market. The latest earnings report reveals a remarkable net income of $14.88 billion for Q1, a significant surge from $2.04 billion the previous year. This growth highlights Nvidia's strategic pivot towards expanding cloud infrastructure beyond its traditional hardware focus. CEO Jensen Huang emphasized the rise of "AI factories"—advanced data centers powered by Nvidia's chips, aimed at meeting the increasing demands of AI workloads. These facilities play a vital role in developing and deploying AI applications, including generative AI systems for text and image creation. This shift aligns with the overall trend of cloud spending fueling future AI and data processing capabilities. Furthermore, Nvidia's market approach includes a 10-for-1 stock split to enhance share accessibility and a dividend boost, reflecting strong investor confidence. Amid tech giants intensifying investments in AI infrastructure, Nvidia's cutting-edge chips remain pivotal in this evolving landscape. Analysts caution about the sustainability of this growth in the long run, emphasizing the importance of balancing AI model training with the shift to inference tasks, potentially creating opportunities for more cost-effective alternatives.
-
Akash Bajwa
With every new generation of Nvidia chips, we're likely to see a 50% drop in the cost of compute power. This will only accelerate the proliferation of Small Language Models (SLMs) that's begun (see Phi, OpenELM). On the other end of the spectrum, as Big Tech announces >$40bn in data centre capex, we're well on our way to building out data centres that will confer a 25-100x increase in compute power relative to what was needed to train GPT-4. These future clusters will be composed entirely of Blackwell, Blackwell +1, and Blackwell +2 generations of chips. Eventually, to scale to the supercomputer levels needed to potentially train GPT-6, we may need to site them at nuclear power plants - this has lots of implications for which nations are better positioned to build out these supercomputers and will inevitably fan the long-running debate around nuclear energy. https://lnkd.in/ezAj-QGW
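The ~50% cost drop per generation quoted above compounds quickly. A minimal sketch of the arithmetic (the function names and the per-generation figure are illustrative assumptions, not Nvidia data):

```python
def cost_of_fixed_compute(generations, drop_per_gen=0.5):
    """Relative cost of a fixed amount of compute after n chip
    generations, assuming each generation cuts cost by drop_per_gen."""
    return (1.0 - drop_per_gen) ** generations

def compute_per_dollar(generations, drop_per_gen=0.5):
    """Equivalently, how much more compute a fixed budget buys."""
    return 1.0 / cost_of_fixed_compute(generations, drop_per_gen)

# Three generations out (e.g. Blackwell, Blackwell +1, Blackwell +2):
print(cost_of_fixed_compute(3))  # 0.125 -> one eighth of today's cost
print(compute_per_dollar(3))     # 8.0   -> 8x compute per dollar
```

Under that assumption, a fixed budget buys 8x the compute within three generations, which is part of why the 25-100x cluster build-outs become thinkable.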
-
Joshua He
Synopsys Inc was a spinout of GE exiting the semiconductor business, much like ASML was a spinout of PHILIPS. A lot of the great semiconductor companies were started this way (I don't know any others).

1. When using AI to design chips, they emphasize absolute correctness, which is different from generative AI that looks like 🪄 inside a SaaS app.

2. TSMC and Synopsys were founded within three months of each other, marking the start of an era where fabless giants like NVIDIA and Broadcom (not you, AMD) could flourish. Even Jensen says Synopsys is "mission critical" to their success.

3. In order to stay on an exponential (Moore's Law) you need to race with people who are crazier than you. Customers like Jensen Huang or Andy Grove who are always paranoid about going out of business. Even though I'm the biggest Intel Corporation supporter, it's not a surprise that the extremely nonparanoid Brian Krzanich (who was paranoid about the wrong things, cough cough) was not able to lock down the HPC market even when Intel had 99% market share in the data center at times.

4. Synopsys and TSMC are so hard to disrupt because they are selling an entire ecosystem of support and decades of experience, whereas companies like NVIDIA, Intel, and AMD are selling compute at the end of the day. Obviously this is changing as NVIDIA goes full stack with networking as well as end-user AI services like inference and model training.

5. Transistor density per area is tapering off; the next 100x comes from 3D packaging like CoWoS (or the best-in-class Foveros from Intel Corporation), software optimizations, and more specialized hardware, not to mention further density and PPAC from future nodes like Intel Corporation 18A and 14A. These improvements stack, so a 2-3x improvement in each area can get 100x or more from here.

What does this mean?

1. Samsung has terrible 3nm yields and their product teams have switched away from Samsung processors and nodes. We can count them out of the foundry race, especially with them needing to focus on their memory division, which SK hynix is winning.

2. Use Intel Foundry Services From Acquired #ai #nvidia #hardware #intel #amd #openai #broadcom #samsung #gpu #compute #tsmc
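The "these improvements stack" claim in point 5 is just multiplication of roughly independent gains; a quick hedged check (the choice of areas and factors is illustrative, not from the post's sources):

```python
from math import prod

def stacked_gain(factors):
    """Overall speedup when independent improvements multiply."""
    return prod(factors)

# Four of the areas named above, each improving 2-3x:
# 3D packaging, software optimization, specialized hardware, future nodes.
print(stacked_gain([2, 2, 2, 2]))  # 16 at the low end
print(stacked_gain([3, 3, 3, 3]))  # 81, approaching the claimed ~100x
```

So 2-3x per area across four areas lands between 16x and 81x, which is the order of magnitude the post is gesturing at.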
-
Fahad Najam
** This is not financial advice, so do your own due diligence **

$50B TAM for 1M GPU clusters? Legendary Broadcom CEO Hock Tan believes so. I had the opportunity to listen to Hock Tan at Arista Networks' 10-year IPO anniversary. Hock believes that for generative AI to truly develop AGI, or the ability to reason, it will need 1M GPU clusters, and only 2 or 3 hyperscalers or one or two sovereign customers have the capability to build such massive clusters. Hock Tan estimates the $50B TAM per 1M GPU cluster includes $30B for GPUs (assuming a GPU ASP of $30K), $5B for networking (an interesting 6:1 ratio between GPU/compute and networking), and $15B for infrastructure like power, including power generation, cooling, data center space, etc. What plausible business case supports such a massive investment remains to be seen. Hock admits it's more of a moonshot-type project but believes we are getting there.

The biggest inhibitor to scaling GPU or compute capacity remains power. I think this has profound implications for investors. While power generation capacity takes a long time to come online (7-8 years) and new data center buildouts take 2-3 years, the only way for hyperscalers (like Amazon Web Services (AWS), Google, and Microsoft) to bring more compute capacity online is to upgrade their existing brownfield infrastructure. This should be positive for Intel Corporation, AMD, and of course NVIDIA, as not only do we have the possibility of more accelerated compute deployments, but also the upgrade of non-AI compute infrastructure. Prior to the AI momentum, the upgrade cycle of traditional compute was stretching to 5-6 years, with most hyperscale CFOs pushing to extend the amortization schedule of compute assets. Power limitations and growing demand for AI capabilities will force these hyperscalers to rethink their amortization schedules, and thus I believe the pendulum swings in favor of infrastructure providers.

Shorter upgrade cycles and more emphasis on higher, more power-efficient technologies are great for networking players such as Arista Networks and optics suppliers such as Coherent Corp., Lumentum, etc. Interestingly, Hock Tan believes that as AI LLM models achieve AGI capability they will begin to generate their own data (thus the ability to reason) and will not require massive external data to be trained on, which has significant implications for wide-area bandwidth requirements. Would love to hear Bill Gartner's thoughts on this. Hock Tan also believes that power limitations will shift the balance in favor of custom ASICs vs. GPUs. While ASICs in general are more power efficient than GPUs, I think this outcome entirely depends on the maturity of the LLM models. In the past the hyperscalers optimized for $1/gigabit of bandwidth, but now they need to optimize for power/gigabit. This has profound implications for the networking and optics supply chain. Would love to get your thoughts.
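Hock Tan's $50B figure is simple arithmetic over the numbers quoted above; a back-of-envelope sketch (the function name and breakdown are mine, derived only from the post's figures):

```python
def cluster_tam(gpus=1_000_000, gpu_asp=30_000):
    """Back-of-envelope TAM for a 1M-GPU cluster, per the quoted figures:
    $30B of GPUs, ~6:1 compute-to-networking, $15B of infrastructure."""
    gpu_total = gpus * gpu_asp          # 1M GPUs x $30K ASP = $30B
    networking = gpu_total // 6         # 6:1 ratio -> $5B
    infrastructure = 15_000_000_000     # power, cooling, data center space
    total = gpu_total + networking + infrastructure
    return gpu_total, networking, infrastructure, total

gpu, net, infra, total = cluster_tam()
print(f"${total / 1e9:.0f}B total")  # $50B total
```

The 6:1 ratio reproduces the quoted $5B networking figure exactly, so the three components sum cleanly to $50B.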
-
Tarak Rindani
#NVIDIA releases "Digital Human Microservices", paving way for future of #GenerativeAI avatars. Companies in #CustomerService, #gaming and #healthcare are the first to adopt ACE technologies to simplify creating, animating and operating lifelike #DigitalHumans across customer service, #telehealth, gaming and #entertainment.
-
Marko L.
During the World Artificial Intelligence Conference (WAIC) in Shanghai this weekend, SenseTime 商汤科技 announced SenseNova 5.5. The Hong Kong-based firm says its latest model is capable of processing text, images, and videos in real time. While performance comparisons have become the core of marketing strategies, in reality speed, security, APIs, support from other AI tools like LangChain and LlamaIndex, playgrounds, and other tooling are more decisive. Anyway, the competition in AI innovation has just begun.
-
TechCon SoCal
With 2 days to go to the largest gathering of investors, innovators, and entrepreneurs in Southern California, excitement is brewing for Sameer Wasson's keynote session, "IP as the Catalyst for Future Semiconductor Growth," at TechCon SoCal. As the CEO of MIPS, Wasson is poised to deliver valuable insights into the role of intellectual property (IP) in driving innovation and growth within the semiconductor industry. Attendees can anticipate an engaging exploration of how IP strategies are shaping the future landscape of semiconductor technologies. In the current landscape of semiconductor development, Wasson will shed light on the pivotal role that intellectual property plays as a catalyst for innovation and growth. With the semiconductor industry experiencing rapid evolution and increasing competition, effective IP strategies are essential for companies to differentiate themselves and stay ahead of the curve. Wasson's session promises to provide valuable perspectives on how leveraging IP assets can fuel advancements in semiconductor technologies and propel industry growth in the years to come. Reflecting on the past, Wasson will trace the historical significance of intellectual property in shaping the trajectory of the semiconductor industry. From the early days of semiconductor innovation to the present era of complex microprocessor designs and advanced semiconductor architectures, IP has been a driving force behind technological progress. Wasson's retrospective analysis will offer insights into key milestones and challenges in the intersection of IP and semiconductor development, providing attendees with a deeper understanding of the industry's evolution. Get ready to uncover the power of IP in semiconductor growth – don't miss Wasson's keynote at TechCon SoCal! 
OPEN Silicon Valley TIE San Diego Interlock Capital Orange County OC Startup Council The Brink SBDC EvoNexus UC San Diego SDSU College of Sciences Pismo Ventures Cross Ocean Ventures Startup Steroid - Deal Flow | SPVs | Demo Day Platform Alliance for Southern California Innovation Spark Growth Ventures SoCal Startup Day Reflect Ventures Wharton Alumni Angels NuFund Venture Group Flok Labx Ventures Mercury Pankaj Kedia Serhat Pala Neal Bloom David Saxton Cheryl K Goodman Ellen M. Chang Niraj Desai Akash Pai Sufyan Subzwari Jonah Peake 🐺 Ella Napata Stephen Silver Fred Grier Scott Fox 🦊 Atlás Blake Eric Eide #TechconSoCal2024 #IndustryExperts #TechConference #InnovationSummit #TechTalks #startupsteroid #innovations #technologynews #networking #artificialintelligence #digitalhealth #robotics #startups #AngelInvestors #vc #SandDiegoStartup #founders #consumertech