
Tech is having a reckoning. Tech investors? Not so much.


On January 10, Charlie O’Donnell, a startup investor who runs Brooklyn Bridge Ventures, published a blog post that he hoped would inspire self-reflection among his peers in the industry. Provocatively titled “Seed Investments in Insurrection,” the post argued that venture capitalists needed to wrestle with their impact on democracy.

“It’s kind of hard to make money if the long-term consequences of your investments threaten the free and open democracy that underpins our society,” he wrote, “an extreme statement—until this past week,” when “domestic terrorists, at least in part radicalized on one-time venture capital backed platforms like Facebook, YouTube, Twitch and Twitter stormed the US Capitol building.” 

The events in Washington forced technology companies to face a public reckoning over their role in promoting and amplifying extreme content. For years, critics called for social media platforms to enforce their own policies on hate speech, harassment, and incitement to violence, but the companies largely resisted. In the wake of the attack on the Capitol, however, they began taking action. Facebook and Instagram disabled Trump’s ability to post until after the inauguration, Twitter banned the president as well as 70,000 QAnon-related accounts, and YouTube prevented Trump’s account from posting for seven days.

Newer spaces, too, have drawn more attention—especially those that have capitalized on the moment, from explicitly right-wing platforms like Parler and Gab, which have subsequently been censured by Apple and Amazon, to the encrypted messaging app Telegram, which has seen millions of users sign up in recent weeks.

But, as O’Donnell points out, one critical part of the technology industry has remained silent: the people who fund these companies. “I think the preference of most people is to stay out of things if they seem controversial,” he says.

“They wrote a check and moved on”

“Right now, they want to keep a low profile,” says Roger McNamee, who was one of the earliest investors in Facebook but has since become one of social media’s most vocal critics. “Many of them are associated with these platforms that are causing all the trouble, and they don’t want anybody to make that connection.”

Mitch Kapor, an early software entrepreneur turned investor, has long been outspoken about the role investors must play in acting responsibly and holding companies to account. He and his wife, venture capitalist and impact investor Freada Kapor Klein, were among the earliest to put money into Uber—but openly criticized the company in 2017 after sexual harassment claims by former employee Susan Fowler. 

“They put out a statement, they wrote a check, and they moved on, without changing how they do business”

Freada Kapor Klein

For years, the duo has called on fellow VCs to do better, and there have been moments of reckoning, including in the aftermath of the killing of George Floyd last summer. At best, however, investors’ actions have been “externally facing,” says Kapor Klein. “They put out a statement, they wrote a check, and they moved on, without changing how they do business.” (On the other end of the spectrum, some VCs echoed Donald Trump’s “few bad apples” ambivalence about white supremacist groups like the Proud Boys, while others applauded cryptocurrency company Coinbase’s decision to ban discussions of racial equity and politics from the office completely.)

But even the minimal level of self-reflection from last summer, Kapor Klein adds, has been largely missing since January 6. (The National Venture Capital Association put out a statement on January 7 condemning the “domestic terrorist attacks,” but public positions from firms and individual investors, who have more influence on startup culture, have been extremely rare.)

For Mitch Kapor, today’s investors—who typically sit on the boards of these companies and are meant to guide their strategies—are trying to avoid being held responsible. 

“They just retreat into silence,” he says. “They don’t want to admit that they’ve created a disaster that they bear responsibility for.” 

In fact, according to corporate development adviser Arjun Gupta, VCs are more concerned with the optics of being drawn into the political fight. Most of them, he says, feel uneasy about the suggestion that investors should have any say on the politics of the companies in their portfolio—or of the user bases that they court.

He has had multiple conversations with venture capitalists on behalf of his clients since January 6, and says that some VCs are discussing pulling investments as a “risk mitigation strategy.” Rather than concern over the impact of the platforms they fund, he suggests, they are afraid of “pressure from their staff,” or from the institutional investors whose money VCs manage, to take action. Their aim is to avoid getting “sucked into this shitstorm … of political discourse.”

Some participants say that conversations about accountability are happening in private, including on Clubhouse, the audio-based social network that is popular among Silicon Valley investors and has also faced criticism for its slow response to its harassment problems. (O’Donnell’s Brooklyn Bridge Ventures is an investor in Clubhouse.)

But critics who call for actions such as deplatforming extremist content say they are not asking companies to police political views, but rather to take action when those views are expressed as hate speech and incitements to violence—and to ensure that companies develop and apply content moderation policies. So why are investors so reluctant to hold their companies to account?

“We are a catalyst of this”

While their users might be on the fringe of the political spectrum, many “alt tech” companies are not outsiders in the technology industry. Most are embedded in the Silicon Valley startup and fundraising system that often values the potential for growth over utility or need. Services like Gab, MeWe, Minds, DLive, and CloutHub—which have been slow or unwilling to remove hate speech, conspiracy theories, and threats of violence, sometimes in violation of their own terms of service—have all received money through the pipeline of incubators, crowdfunders, angel investors, fundraising, and acquisitions.

They have also been indirect beneficiaries of the insurrection at the Capitol, seeing spikes in users after the mainstream services deplatformed President Trump, his surrogates, and accounts promoting the QAnon conspiracy.

In a few cases, public pressure has forced action. DLive, a cryptocurrency-based video streaming site, which was acquired by BitTorrent’s Tron Foundation in October 2020, suspended or permanently banned accounts, channels, and individual broadcasts after the Southern Poverty Law Center identified those that livestreamed the attack from inside the Capitol building.

Neither Tron Foundation, which owns DLive, nor Medici Ventures, the Overstock subsidiary that invested in Minds, responded to requests for comment. 

EvoNexus, a Southern California-based tech incubator that helped fund the self-described “non-biased” social network CloutHub, forwarded our request for comment to CloutHub’s PR team, who denied that its platform was used in the planning of the insurrection. They said that a group started on the platform and promoted by founder Jeff Brain was merely for organizing ride sharing to the Trump rally on January 6. The group, it said, “was for peaceful activities only and asked that members report anyone talking about violence.”  

But there’s a fine line between speech and action, says Margaret O’Mara, a historian at the University of Washington who studies the intersection between technology and politics. When, as a platform “you decide you’re not going to take sides, and you’re going to be an unfettered platform for free speech,” and then people “saying horrible things” is “resulting in action,” then platforms need to reckon with the fact that “we are a catalyst of this, we are becoming an organizing platform for this.” 

“Maybe you wouldn’t get dealflow”

For the most part, says O’Donnell, investors are worried that expressing an opinion about those companies might limit their ability to make deals, and therefore make money.

Even venture capital firms “have to depend on pools of money elsewhere in the ecosystem,” he says. “The worry was that maybe you wouldn’t get dealflow,” or that you’d be labeled as “difficult to work with or, you know, picking off somebody who might do the next round of your company.” 

Despite this, however, O’Donnell says he does not believe that investors should completely avoid “alt tech.” Tech investors like disruption, he explains, and they see in alt tech the potential to “break up the monoliths.” 

“Could that same technology be used for coordinating among people in doing bad stuff? Yeah, it’s possible, just in the same way that people use phones to commit crimes,” he says, adding that this issue can be resolved by having the right rules and procedures in place. 

“There’s some alternative tech whose DNA is about decentralization, and there’s some alt-tech whose DNA is about a political perspective,” he says. He does not consider Gab, for example, to be a decentralized platform, but rather “a central hosting hub for people who otherwise violate the terms of service of other platforms.”

“It’s going to be pissing in the wind… because that guy over there is going to be in.”

Charlie O’Donnell

“The internet is decentralized, right? But we have means for creating databases of bad actors, when it comes to spam, when it comes to denial of service attacks,” he says, suggesting the same could be true of bad actors on alt tech platforms. 

But overlooking the more dangerous sides of these communications platforms, and how their design often facilitates dangerous behavior, is a mistake, says O’Mara. “It’s a kind of escapism that runs through the response that powerful people in tech … have, which is just, if we have alternative technologies, if we just have a decentralized internet, if we just have Bitcoin”—then everything will be better.

She calls this position “idealistic” but “very unrealistic,” and a reflection of “a deeply rooted piece of Silicon Valley culture. It goes all the way back to, ‘We don’t like the world as it is, so we’re gonna build this alternative platform on which to revise social relationships.’” 

The problem, O’Mara adds, is that these solutions are “very technology driven” and “chiefly promulgated by pretty privileged people that … have a hard time … [imagining] a lot of the social politics. So there’s not a real reckoning with structural inequality or other systems that need to be changed.”

How to have “a transformational effect”

Some believe that tech investors could shift what kind of companies get built, if they chose to. 

“If venture capitalists committed to not investing in predatory business models that incite violence, that would have a transformational effect,” says McNamee.

At an individual level, they could ask better questions even before investing, says O’Donnell, including avoiding companies without content policies, or requesting that companies create them before a VC signs on.

Once invested, O’Donnell adds, investors can also sell their shares, including at a loss, if they truly want to take a stand. But he recognizes the tall order that this would represent—after all, it’s highly likely that a high-growth startup will simply find a different source of money to step into the space that a principled investor just vacated. “It’s going to be pissing in the wind,” he says, “because that guy over there is going to be in.”

In other words, a real reckoning among VCs would require a reorientation of how Silicon Valley thinks, and right now it is still focused on “one, and only one, metric that matters, and that’s financial return,” says Freada Kapor Klein.

If funders changed their investment strategies—to put in moral clauses against companies that profit from extremism, for example, as O’Donnell suggested—the impact that this would have on what startup founders chase would be enormous, says O’Mara. “People follow the money,” she says, but “it’s not just money, it’s mentorship, it’s how you build a company, it’s this whole set of principles about what success looks like.” 

“It would have been great if VCs who pride themselves on risk-taking and innovation and disruption … led the way,” concludes Kapor Klein. “But this tsunami is coming. And they will have to change.”

Lyron Foster is a Hawaii based African American Musician, Author, Actor, Blogger, Filmmaker, Philanthropist and Multinational Serial Tech Entrepreneur.


An AI is training counselors to deal with teens in crisis


Counselors volunteering at the Trevor Project need to be prepared for their first conversation with an LGBTQ teen who may be thinking about suicide. So first, they practice. One of the ways they do it is by talking to fictional personas like “Riley,” a 16-year-old from North Carolina who is feeling a bit down and depressed. With a team member playing Riley’s part, trainees can drill into what’s happening: they can uncover that the teen is anxious about coming out to family, recently told friends and it didn’t go well, and has experienced suicidal thoughts before, if not at the moment.

Now, though, Riley isn’t being played by a Trevor Project employee but is instead being powered by AI.

Just like the original persona, this version of Riley—trained on thousands of past transcripts of role-plays between counselors and the organization’s staff—still needs to be coaxed a bit to open up, laying out a situation that can test what trainees have learned about the best ways to help LGBTQ teens. 

Counselors aren’t supposed to pressure Riley to come out. The goal, instead, is to validate Riley’s feelings and, if needed, help develop a plan for staying safe. 

Crisis hotlines and chat services make a fundamental promise: reach out, and we’ll connect you with a real human who can help. But the need can outpace the capacity of even the most successful services. The Trevor Project believes that 1.8 million LGBTQ youth in America seriously consider suicide each year. Its 600 existing counselors for its chat-based services can’t handle that need. That’s why the group—like an increasing number of mental health organizations—has turned to AI-powered tools to help meet demand. It’s a development that makes a lot of sense, while simultaneously raising questions about how well current AI technology can perform in situations where the lives of vulnerable people are at stake.

Taking risks—and assessing them

The Trevor Project believes it understands this balance—and stresses what Riley doesn’t do. 

“We didn’t set out to and are not setting out to design an AI system that will take the place of a counselor, or that will directly interact with a person who might be in crisis,” says Dan Fichter, the organization’s head of AI and engineering. This human connection is important in all mental health services, but it might be especially important for the people the Trevor Project serves. According to the organization’s own research in 2019, LGBTQ youth with at least one accepting adult in their life were 40% less likely to report a suicide attempt in the previous year. 

The AI-powered training role-play, called the crisis contact simulator and supported by money and engineering help from Google, is the second project the organization has developed this way: it also uses a machine-learning algorithm to help determine who’s at highest risk of danger. (It trialed several other approaches, including many that didn’t use AI, but the algorithm simply gave the most accurate predictions for who was experiencing the most urgent need.)

AI-powered risk assessment isn’t new to suicide prevention services: the Department of Veterans Affairs also uses machine learning to identify at-risk veterans in its clinical practices, as the New York Times reported late last year. 

Opinions vary on the usefulness, accuracy, and risk of using AI in this way. In specific environments, AI can be more accurate than humans in assessing people’s suicide risk, argues Thomas Joiner, a psychology professor at Florida State University who studies suicidal behavior. In the real world, with more variables, AI seems to perform about as well as humans. What it can do, however, is assess more people at a faster rate. 

Thus, it’s best used to help human counselors, not replace them. The Trevor Project still relies on humans to perform full risk assessments on young people who use its services. And after counselors finish their role-plays with Riley, those transcripts are reviewed by a human. 

How the system works

The crisis contact simulator was developed because doing role-plays takes up a lot of staff time and is limited to normal working hours, even though a majority of counselors plan on volunteering during night and weekend shifts. But even if the aim was to train more counselors faster, and better accommodate volunteer schedules, efficiency wasn’t the only ambition. The developers still wanted the role-play to feel natural, and for the chatbot to nimbly adapt to a volunteer’s mistakes. Natural-language-processing algorithms, which had recently gotten really good at mimicking human conversations, seemed like a good fit for the challenge. After testing two options, the Trevor Project settled on OpenAI’s GPT-2 algorithm.

The chatbot uses GPT-2 for its baseline conversational abilities. That model is trained on 45 million pages from the web, which teaches it the basic structure and grammar of the English language. The Trevor Project then trained it further on all the transcripts of previous Riley role-play conversations, which gave the bot the materials it needed to mimic the persona.

Throughout the development process, the team was surprised by how well the chatbot performed. There is no database storing details of Riley’s bio, yet the chatbot stayed consistent because every transcript reflects the same storyline.

But there are also trade-offs to using AI, especially in sensitive contexts with vulnerable communities. GPT-2, and other natural-language algorithms like it, are known to embed deeply racist, sexist, and homophobic ideas. More than one chatbot has been led disastrously astray this way, the most recent being a South Korean chatbot called Lee Luda that had the persona of a 20-year-old university student. After quickly gaining popularity and interacting with more and more users, it began using slurs to describe the queer and disabled communities.

The Trevor Project is aware of this and designed ways to limit the potential for trouble. While Lee Luda was meant to converse with users about anything, Riley is very narrowly focused. Volunteers won’t deviate too far from the conversations it has been trained on, which minimizes the chances of unpredictable behavior.

This also makes it easier to comprehensively test the chatbot, which the Trevor Project says it is doing. “These use cases that are highly specialized and well-defined, and designed inclusively, don’t pose a very high risk,” says Nenad Tomasev, a researcher at DeepMind.

Human to human

This isn’t the first time the mental health field has tried to tap into AI’s potential to provide inclusive, ethical assistance without hurting the people it’s designed to help. Researchers have developed promising ways of detecting depression from a combination of visual and auditory signals. Therapy “bots,” while not equivalent to a human professional, are being pitched as alternatives for those who can’t access a therapist or are uncomfortable confiding in a person.

Each of these developments, and others like them, requires thinking about how much agency AI tools should have when it comes to treating vulnerable people. And the consensus seems to be that at this point the technology isn’t really suited to replacing human help.

Still, Joiner, the psychology professor, says this could change over time. While replacing human counselors with AI copies is currently a bad idea, “that doesn’t mean that it’s a constraint that’s permanent,” he says. People “have artificial friendships and relationships” with AI services already. As long as people aren’t being tricked into thinking they are having a discussion with a human when they are talking to an AI, he says, it could be a possibility down the line.

In the meantime, Riley will never face the youths who actually text in to the Trevor Project: it will only ever serve as a training tool for volunteers. “The human-to-human connection between our counselors and the people who reach out to us is essential to everything that we do,” says Kendra Gaunt, the group’s data and AI product lead. “I think that makes us really unique, and something that I don’t think any of us want to replace or change.”


How to land startup funding from Brookfield Asset Management, which manages $600 billion in assets


There are big investment firms, and then there are big investment firms. Brookfield Asset Management, the Toronto-based 122-year-old outfit whose current market cap is $63 billion and that oversees $600 billion in assets, clearly falls into the latter camp. Think real estate, infrastructure, renewable power, private equity, and credit. If it falls into a defined asset class, Brookfield probably has it in its portfolio.

That’s also true of venture capital, though venture is new enough to Brookfield that founders who might like its capital are still getting the memo. Indeed, it was a little less than four years ago that Brookfield Technology Partners began investing off the company’s balance sheet and soon after recruited Josh Raffaelli — a Stanford MBA who cut his teeth as a principal with Draper Fisher Jurvetson, then spent another five years with Silver Lake — to lead the practice.

Its existence came as a surprise to him, actually. “I’ve been a tech investor in Silicon Valley,” says Raffaelli. “My entire professional career has been in a 15-minute drive from the house I grew up in. And I had never heard about Brookfield before they started this practice because it’s in businesses. It’s in real estate. It has done things that are not generally tech-enabled.”

Not until fairly recently, that is. Raffaelli and his 11-person team have not only made dozens of bets since then, but they’re currently investing out of a pool of capital that features third-party capital in addition to Brookfield’s own—a first. As for what they are looking for, the idea is to help Brookfield reimagine how its many office towers, malls, and other real estate might be used or developed or leased or insured. It’s to make Brookfield smarter, better prepared, and more profitable. In return, the startups get industry expertise and a major customer in Brookfield.

 

To date, its bets have varied widely, as with Armis, an IoT startup focused on unmanaged device security; Loanpal, a point-of-sale payment platform for solar and other home efficiency products; and Carbon Health, a primary care company that blends real-world and virtual visits. “We’re getting our themes effectively from the Brookfield ecosystem,” Raffaelli says.

Pulling back the curtain a bit more, Raffaelli says his team writes checks of $25 million to $50 million and that they look for companies with $10 million in revenue that are seeing top-line year-over-year growth of more than 100%. In terms of pacing, they jump into roughly one new deal per quarter.

The fund is also independent and has its own committee, but that committee is made up of the senior managing partners from each line of Brookfield’s businesses. (“These are the people that actually help us translate our investment themes that we’re generating here,” Raffaelli notes.)

To highlight how the operation works, Raffaelli points to Latch, a smart access software business that announced last month that it’s using a blank-check company backed by the real estate giant Tishman Speyer to become publicly traded. Brookfield owns roughly 70,000 multifamily units in North America, “so we have a lot of doors that need a lot of locks,” Raffaelli says. Latch, of course, is not the only smart access lock out there, so Brookfield ran “what was almost like a mini [proposal process], reaching out to all different companies in the market to understand how they compete,” he says.

It was a “six-month exercise,” but ultimately his group led Latch’s Series B round in 2018, and since then Brookfield has bought about 7,000 locks from the business. It’s a meaningful sum, considering that when Brookfield first invested, the company had less than $20 million in bookings; those 7,000 locks have since brought in an additional $10 million to $15 million in revenue, Raffaelli says. “When we buy a lot of things at that stage of a company,” he adds, “we’re meaningfully enhancing their trajectory.”

It’s not a foolproof strategy, doubling down. If Latch’s locks turned out to be lemons (they haven’t), Brookfield would be out a big check along with that capital expenditure. It’s why Brookfield takes its time, says Raffaelli, adding that if he has done his job right, his team is involved with a company well before it raises a round and has already shown that it is a “strategic partner that has another lever.”

Either way, Raffaelli says that while the commercial real estate market has been hard hit by the pandemic, it has, counterintuitively, been a productive time for his group, because the crisis has given the real estate world a stronger incentive to adopt tech tools faster. Among the bets Raffaelli sounds most excited about right now are VTS, a leasing and asset management platform that can show properties remotely, and Deliverr, an e-commerce fulfillment startup that Raffaelli describes as “Amazon Prime for everybody else.”

In fact, Raffaelli convincingly argues that while the use case for a lot of real estate is changing, the so-called built world remains Brookfield’s strongest competitive advantage given the size of its footprint. The way he sees it, its options going forward are plentiful. “You’re looking at retail locations becoming ghost kitchens; you’re looking at retail locations turning into distribution and logistics facilities. We can turn physical locations into healthcare sites for [our portfolio company] Carbon Health, and our mall locations into locations for urgent care and primary care clinics for testing and vaccinations.”

It will never be a completely seamless transition. Brookfield has to be “thoughtful” given the pandemic and its devastating impacts, too. But Raffaelli comes across as excited in conversation nonetheless. The idea of turning physical real estate into a “mechanism for change within technology businesses,” adds Raffaelli, is a “very powerful place to be.”


Singapore-based Raena gets $9M Series A for its pivot to skincare and beauty-focused social commerce


Raena’s team, from left to right: chief operating officer Guo Xing Lim, chief executive officer Sreejita Deb and chief commercial officer Widelia Liu

Raena was founded in 2019 to create personal care brands with top social media influencers. After several launches, however, the Singapore-based startup quickly noticed an interesting trend: customers were ordering batches of products from Raena every week and reselling them on social media and e-commerce platforms like Shopee and Tokopedia. Last year, the company decided to focus on those sellers, and pivoted to social commerce.

Today Raena announced it has raised a $9 million Series A, co-led by Alpha Wave Incubation and Alpha JWC Ventures, with participation from AC Ventures and returning investors Beenext, Beenos and Strive. Its previous funding was a $1.82 million seed round announced in July 2019.

After interviewing people who were setting up online stores with products from Raena, the company’s team realized that sellers’ earnings potential was capped because they were paying retail prices for their inventory.

They also saw that even though new C2C retail models, like social commerce, are gaining popularity, the beauty industry’s supply chain hasn’t kept up. Sellers usually need to order minimum quantities, which makes it harder for people to start their own businesses, Raena co-founder Sreejita Deb told TechCrunch.

“Basically, you have to block your capital upfront. It’s difficult for individual sellers or micro-entrepreneurs to work with the old supply chain in categories like beauty,” she said.

Raena decided to pivot to serve those entrepreneurs. The company provides a catalog that includes mostly Japanese and Korean skincare and beauty brands. For those brands, Raena represents a way to enter new markets like Indonesia, which the startup estimates has a $20 billion market opportunity.

Raena resellers, who are mostly women between 18 and 34 years old in Indonesia and Malaysia, pick what items they want to feature on their social media accounts. Most use TikTok or Instagram for promotion, and set up online stores on Shopee or Tokopedia. But they don’t have to carry inventory. When somebody buys a product from a Raena reseller, the reseller orders it from Raena, which ships it directly to the customer.

This drop-shipping model means resellers make higher margins. Since they don’t have to carry inventory, it also dramatically lowers the barrier to launching a small business. Even though Raena’s pivot to social commerce coincided with the COVID-19 pandemic, Deb said it grew its revenue 50 times between January and December 2020. The platform now has more than 1,500 resellers, and claims a 60% seller retention rate after six months on the platform.

She attributes Raena’s growth to several factors, including the increase in online shopping during lockdowns and people looking for ways to earn additional income during the pandemic. While forced to stay at home, many people also began spending more time online, especially on the social media platforms that Raena resellers use.

Raena also benefited from its focus on skincare. Even though many retail categories, including color cosmetics, took a hit, skincare products proved resilient.

“We saw skincare had higher margins, and there are certain markets that are experts at formulating and producing skincare products, and demand for those products in other parts of the world,” she said, adding, “we’ve continued being a skincare company and because that is a category we had insight into, it was our first entry point into this social selling model as well. 90% of our sales are skincare. Our top-selling products are serums, toners, essences, which makes a lot of sense because people are in their homes and have more time to dedicate to their skincare routines.”

Social commerce, which allows people to earn a side income (or even a full-time income) by promoting products through social media, has taken off in several Asian markets. In China, for example, Pinduoduo has become a formidable rival to Alibaba through its group-selling model and focus on fresh produce. In India, Meesho resellers promote products through social media platforms like WhatsApp, Facebook and Instagram.

Social commerce is also gaining traction in Southeast Asia, with gross merchandise value growing threefold during the first half of 2020, according to iKala.

Deb said one of the ways Raena is different from other social commerce companies is that most of its resellers are selling to customers they don’t know, instead of focusing on family and friends. Many already had TikTok or Instagram profiles focused on beauty and skincare, and had developed reputations for being knowledgeable about products.

As Raena develops, it plans to hire a tech team to build tools that will simplify the process of managing orders and also strike deals directly with manufacturers to increase profit margins for resellers. The funding will be used to increase its team from 15 to over 100 over the next three months, and it plans to enter more Southeast Asian markets.
