Investors continue to pump money into generative AI tech. Case in point: Replit, a startup developing an AI-powered code-generating tool called Ghostwriter, this week raised nearly $100 million ($97.4 million) at a $1.16 billion post-money valuation.
Andreessen Horowitz led the round — a Series B extension — with participation from Khosla Ventures, Coatue, SV Angel, Y Combinator, Bloomberg Beta, Naval Ravikant, ARK Ventures and Hamilton Helmer.
“We are relentless in our mission to empower a billion software developers,” Replit founder and CEO Amjad Masad said in a statement, adding that the new funds — which bring Replit’s total raised to over $200 million — will be put toward further developing the core product experience, expanding Replit’s cloud services and “driving innovation” in AI.
“AI has already brought that future closer,” Masad continued. “We look forward to expanding our offerings for professional developers.”
Based in San Francisco, Replit was co-founded by programmers Amjad Masad, Faris Masad and designer Haya Odeh in 2016. Before creating Replit, Amjad Masad worked in engineering roles at Yahoo and Facebook, where he built software development tooling.
But perhaps its headlining feature is Ghostwriter, a suite of features powered by an AI model trained on publicly available code. Ghostwriter — much like GitHub's Copilot — can make suggestions and explain code, considering what users type and other context from their accounts, like the programming languages they're using.
Ghostwriter appears to be the driver behind Replit's recent explosive growth, leading to a partnership with Google Cloud and a user base eclipsing 22 million developers. But like all generative AI tools, it comes with risks — and potential legal consequences that have yet to fully play out in the courts.
Microsoft, GitHub and OpenAI are being sued in a class action lawsuit that accuses them of violating copyright law by allowing Copilot to regurgitate sections of licensed code without providing credit. Liability aside, some legal experts have suggested that AI like Copilot could put companies at risk if they were to unwittingly incorporate copyrighted suggestions from the tool into their production software.
It’s unclear whether Ghostwriter, too, was trained on licensed or copyrighted code. But Replit does note that the code Ghostwriter suggests might contain “incorrect, offensive or otherwise inappropriate” strings.
That includes insecure code. According to a recent study out of Stanford, software engineers who use code-generating AI systems are more likely to introduce security vulnerabilities in the apps they develop. While the study didn't look at Replit specifically, it stands to reason that developers who use it could fall victim to the same pitfalls.
All that's to say: Replit has its work cut out for it.