
The future of social networks might be audio


Every morning, as Nandita Mohan sifts through her emails, her college pals are in her ear — recounting their day, reminiscing, reflecting on what it’s like to have graduated in the throes of a pandemic.

Mohan isn’t on the phone, nor is she listening to an especially personal podcast; she’s using Cappuccino, an app that takes voice recordings from a closed group of friends or family and delivers them as downloadable audio.

“Just hearing all of us makes me value our friendship, and hearing their voices is a gamechanger,” the 23-year-old Bay Area software programmer says. 

Audio messaging has been available for years; voice memos on WhatsApp are especially big in India, and WeChat audio messages are popular in China. And the pandemic's social distancing has made voice memos an easy way for people to stay in touch while bypassing Zoom fatigue. But now a new wave of hip apps is baking the immediacy and rawness of audio into the core experience, making voice the way in which people connect again. From phone calls, to messaging, and back to audio — the way we use our phones may be coming full circle.

The newcomers

The best-known audio-focused network is Clubhouse, the buzzy, invite-only app that debuted to glowing reviews for its talk-show-like twist on the chatrooms of the early internet, making it akin to dropping in on an (online) party conversation.

But Clubhouse’s promise was shattered by its lack of moderation and the unfettered chatter of misogynistic venture capitalists. New York Times reporter Taylor Lorenz, once a fan of the app, was subject to harassment in Clubhouse sessions for calling out one VC’s behavior.

“I don’t plan on opening the app again,” Lorenz told Wired. “I don’t want to support any network that doesn’t take user safety seriously.” Her experience wasn’t a one-off, and since then darker, racist elements have appeared, suggesting the behavior that mars every other social platform also exists beneath Clubhouse’s exclusive, cool veneer.

Gaming chat app Discord, meanwhile, has exploded during the pandemic. The service uses voice-over-IP software to let people talk instead of typing (an idea that came from video gamers who found typing while also playing impossible). In June, to tap into people’s need for connection during the pandemic, Discord announced a new slogan — “Your place to talk” — and efforts to make the service appear less gamer-centric. The marketing push seems to have worked: By October, Discord estimated 6.7 million users — up from 1.4 million in February, just before the pandemic hit.

But while Discord’s communities, or “servers,” can be as small and innocent as kids organizing remote-but-simultaneous sleepovers, they have also included far-right extremists who used the service to organize the Charlottesville white supremacist rallies and the recent insurrection at the US Capitol.

In both Discord and Clubhouse, the in-group culture — nerdy gamers in Discord’s case, over-confident venture capitalists for Clubhouse — has led to instances of groupthink that can be, at best, off-putting, and at worst, bigoted. Yet there’s still an appeal to both: Isn’t it cool to talk and literally be heard? After all, that’s the foundational promise of social media: the democratization of voice.

Speak and you shall be heard

The intimacy of voice makes audio social media that much more appealing in the age of pandemic social distancing and isolation. Jimi Tele, the CEO of Chekmate, a “text-free” dating app that connects users through voice and video only, says that intimacy is what inspired him to launch an app that would be “catfish-proof,” referring to people deceiving others online with fake profiles.

“We wanted to break away from the anonymity and gamification that texting allows and instead create a community rooted in authenticity where users are encouraged to be themselves without judgment,” Tele says. The app’s users start with voice memos that average five seconds, then get progressively longer. And while Chekmate has a video option, Tele says that the app’s several thousand users overwhelmingly favor using their voices. “They are perceived as less intimidating [than video messages],” he says.

This immediacy and authenticity is the reason why Gilles Poupardin created Cappuccino. He wondered why there wasn’t already a product that gathered voice memos together into a single downloadable file. “Everyone has a group chat with friends,” he says. “But what if you could hear your friends? That’s really powerful.”

Mohan agrees. She says that her group of friends switched to Cappuccino from a Facebook Messenger group chat, having tried Zoom calls early on in the pandemic. But those discussions would inevitably circle into a highlights reel of big events. “There was no time for details,” she laments. The daily Cappuccino “beans,” as the stitched-together recordings are called, let Mohan’s friend circle keep up to date in a very intimate way. “My one friend is moving to a new apartment in a new city, and she was just talking about how she goes to get coffee in her kitchen,” Mohan says. “That’s something I would never know in a Zoom call, because it’s so small.”

Even legacy social media firms are getting in on the act. In the summer of 2020, Twitter launched voice tweets — 140 seconds of audio attached to a tweet — and it has since been testing Spaces, its live audio chatroom feature.

“We were interested in whether audio could add an additional layer of connection to the public conversation,” says Rémy Bourgoin, senior software engineer on Twitter’s voice tweets and Spaces team.

Bourgoin says that the vision is for Spaces to be “as intimate and comfortable as attending a well-hosted dinner party. You don’t need to know everyone there to have a good time, but you should feel comfortable sitting at the table.”

You may have snorted in disbelief reading that Twitter wants to create a space that is “comfortable” and “intimate.” After all, Twitter doesn’t exactly have a stellar track record in creating an online environment that is welcoming and protects vulnerable users from abuse. 

Bourgoin says the group is moving slowly on purpose before releasing Spaces beyond beta and a small group of users, even going so far as to include captioning — a rare accessibility feature on audio networks. “Right now, Spaces can be reported by anyone who is in the Space,” Bourgoin says. “Reports will be reviewed by our team, who will evaluate for violations of the Twitter Rules.”

The ugliness

Ah, moderation. Content moderation on audio is far more difficult than on text. Searchable text and automoderators have been used with some success, but human moderators seem to be the most thorough way to block people who don’t abide by community rules — which puts human beings at risk. For platforms where people can jump in at any time and chat, the very democratized medium that makes audio attractive creates a moderation nightmare. “That’s definitely a huge challenge with any user-generated platform,” says Austin Petersmith, whose Capiche.fm — launched in beta last year out of a software community — works a bit like a call-in radio show: hosts call each other to start the show, then invite listeners to chime in while they’re “on-air.”

As users of Clubhouse have learned, voice-only spaces can quickly get ugly, just like anywhere else on the internet. People who already suffer from online abuse in text form — marginalized, female or non-binary, non-white, and/or younger — are unlikely to want to make the leap to a place where they can now be abused in a different, harder-to-police format.

There’s also reason to believe these newer, less regulated platforms will be attractive to the hundreds of disaffected, far-right, conspiracy-minded extremists and QAnon believers who are now creating their own podcast networks.

But still, these audio social networks seem to offer something that traditional social media cannot. One of the format’s main benefits is that it gives users the immediate connection of a voice or video call, but on their own terms. Phone calls — and Zoom calls, for that matter — require some planning. But audio social media can be created and digested at your own convenience, in a way that news alerts, notifications, and doomscrolling don’t allow. As Mohan, who listens to her friends every morning, says of Cappuccino: “It engages me and forces me to listen more carefully as each person is talking. I even take notes of things I want to respond to and say.”

For Mohan, the recordings from her circle of five friends have become a beloved ritual, allowing her to catch up with her friends at her own pace. “Every day, in the middle of my work day, I’ll record my Cappuccino,” she says, referring to the recording she makes on the app. “It feels really personal. I’m hearing all their voices and I feel on top of what they [her friends] are doing in their day to day.”


Atomic, which only funds the startups it launches, just closed its newest fund with $260 million


Jack Abraham has a lot of confidence in what he’s building. Then again, you can’t be modest or unsure of yourself if you’re going to bet exclusively on your own startups as an investor, which is precisely the model that Abraham’s San Francisco-based venture studio, Atomic, has followed since it launched nine years ago.

It all started with $10 million of Abraham’s own money, capital he amassed by selling his first startup, a local shopping engine called Milo, to eBay in 2010, for $75 million. Abraham had dropped out of Wharton as an undergrad with $500,000 from a professor who believed that Abraham — whose father founded ComScore — would himself be a company-building machine.

The professor had good instincts. After selling Milo at age 24, Abraham spent more than three years building products inside of eBay and learning how to lead multiple teams before beginning to look outward, making angel bets, including on Uber and Pinterest, and, he says, spreading around some of his ideas. (Among these, he says, he “invented Postmates. I gave the founders literally the idea for the company; they were working on a B2B company at the time. I was fairly early on there; that helped spawn the whole food delivery thing.”)

He had so many ideas — hundreds, he says — that not long afterward, he created Atomic with cofounder Andrew Dudum, a Wharton peer who is also the son of entrepreneurs and who also dropped out of college to join the startup world. (Dudum’s first stop was a then-nascent startup backed by Sequoia Capital.)

At first, Atomic worked on one company. The following year, it worked on two. By 2018, the outfit had built out a team that could handle many of the back-end functions that startups need to thrive, from recruiting to accounting, and launched 10 companies. Impressed investors gave the firm $150 million to create even more startups.

By then, Abraham and Dudum had brought in two other general partners: Chester Ng and Andrew Salamon. Salamon left in 2019 to launch his own venture studio, Material, with Blue Apron founder Matt Salzberg. The same year, JD Ross, one of a handful of cofounders of the newly public company Opendoor, joined Atomic as a general partner.

The firm has only picked up speed since. Indeed, at this point, Atomic has created “dozens” of startups — including roughly one per month last year, says Abraham. It also just closed on $260 million in new capital commitments, including from a prominent university that now serves as its anchor investor but would prefer not to be named publicly.

Citing “proprietary aspects” to the model, Abraham declines to explain how Atomic’s economics work, except to acknowledge that it operates in “more of a fund context instead of a holding company” where investors would essentially be buying stakes in Atomic itself.

Certainly, it’s easy to appreciate the enthusiasm of Atomic’s investors, including early backers like Peter Thiel and Marc Andreessen. Abraham and Dudum are both compelling storytellers, as we’ve witnessed first-hand in interviewing them at different times. The firm is also starting to see some exits.

One of Atomic’s creations, the telehealth company Hims, was taken public in January through a blank-check company in a deal that valued the company at $1.6 billion, and its shares have been rising since. As of this writing, the three-and-a-half-year-old outfit — run by Dudum, who is doing double-duty as Hims’s CEO and a general partner with Atomic — boasts a market cap of $2.9 billion.

Atomic also sold a voice-powered sales startup, TalkIQ, to the company Dialpad in 2018 for what Forbes reported at the time to be a “little under $50 million.” TalkIQ had raised $22 million altogether.

More exits are coming, suggests Abraham. “There [are] many companies we have that are now approaching the sort of growth and run rate where they have the ability to go public, even as soon as in the next year,” he says.

One of those eventual prospects is Replicant, an autonomous call center startup that has raised $35 million since its 2017 founding, including a $27 million Series A round led by Norwest Venture Partners back in September. Another Atomic startup, Homebound, a three-year-old home-building outfit that handles everything from financing to construction, has also enjoyed some momentum, as well as attracted $53 million from investors.

Though Atomic prides itself on “pressure testing” its ideas, not every startup has been a hit with users. A photo-sharing app called Ever was quickly shut down after NBC reported that the photos people shared were used to train a facial recognition system — tech the company offered to sell to private companies, law enforcement and the military. A sleep-tracking specialist, Rested, was also shut down.

Meanwhile, ZenReach, a Wi-Fi marketing company that had collected at least $94 million from investors through 2018, laid off 20% of its employees that same year. A CEO who’d been brought aboard by Abraham, and who was previously an operating partner with Atomic, has since moved on to a role elsewhere.

If not all of its ideas set the world on fire, Atomic has no shortage of others.

Asked about some of the areas where he sees the most opportunity to innovate, Abraham quickly ticks off “healthcare, finance, education, real estate, and other large industries where truthfully, when you’re inside them, you understand how broken they are, and they are broken up and down the entire stack.

“You study them,” he says, “and then you wonder how is this possible this happened.”


Facebook’s Oversight Board already ‘a bit frustrated’ — and it hasn’t made a call on Trump ban yet


The Facebook Oversight Board (FOB) is already feeling frustrated by the binary choices it’s expected to make as it reviews Facebook’s content moderation decisions, according to one of its members, who was giving evidence today to a UK House of Lords committee that is running an enquiry into freedom of expression online.

The FOB is currently considering whether to overturn Facebook’s ban on former US president Donald Trump. The tech giant banned Trump “indefinitely” earlier this year after his supporters stormed the US Capitol.

The chaotic insurrection on January 6 led to a number of deaths and widespread condemnation of how mainstream tech platforms had stood back and allowed Trump to use their tools as megaphones to whip up division and hate rather than enforcing their rules in his case.

Yet, after finally banning Trump, Facebook almost immediately referred the case to its self-appointed and self-styled Oversight Board for review — opening up the prospect that the ban could be reversed in short order via an exceptional review process that Facebook has fashioned, funded and staffed.

Alan Rusbridger, a former editor of the British newspaper The Guardian — and one of 20 FOB members selected as an initial cohort (the Board’s full headcount will be double that) — avoided making a direct reference to the Trump case today, given the review is ongoing, but he implied that the binary choices it has at its disposal at this early stage aren’t as nuanced as he’d like.

“What happens if — without commenting on any high profile current cases — you didn’t want to ban somebody for life but you wanted to have a ‘sin bin’ so that if they misbehaved you could chuck them back off again?” he said, suggesting he’d like to be able to issue a soccer-style “yellow card” instead.

“I think the Board will want to expand in its scope. I think we’re already a bit frustrated by just saying take it down or leave it up,” he went on. “What happens if you want to… make something less viral? What happens if you want to put an interstitial?

“So I think all these things are things that the Board may ask Facebook for in time. But we have to get our feet under the table first — we can do what we want.”

“At some point we’re going to ask to see the algorithm, I feel sure — whatever that means,” Rusbridger also told the committee. “Whether we can understand it when we see it is a different matter.”

To many people, Facebook’s Trump ban is uncontroversial — given the risk of further violence posed by letting Trump continue to use its megaphone to foment insurrection. There are also clear and repeat breaches of Facebook’s community standards if you want to be a stickler for its rules.

Among supporters of the ban is Facebook’s former chief security officer, Alex Stamos, who has since been working on wider trust and safety issues for online platforms via the Stanford Internet Observatory.

Stamos was urging both Twitter and Facebook to cut Trump off before everything kicked off, writing in early January: “There are no legitimate equities left and labeling won’t do it.”

But in the wake of big tech moving almost as a unit to finally put Trump on mute, a number of world leaders and lawmakers were quick to express misgivings at the big tech power flex.

Germany’s chancellor called Twitter’s ban on him “problematic”, saying it raised troubling questions about the power of the platforms to interfere with speech. While other lawmakers in Europe seized on the unilateral action — saying it underlined the need for proper democratic regulation of tech giants.

The sight of the world’s most powerful social media platforms being able to mute a democratically elected president (even one as divisive and unpopular as Trump) made politicians of all stripes feel queasy.

Facebook’s entirely predictable response was, of course, to outsource this two-sided conundrum to the FOB. After all, that was its whole plan for the Board. The Board would be there to deal with the most headachey and controversial content moderation stuff.

And on that level Facebook’s Oversight Board is doing exactly the job Facebook intended for it.

But it’s interesting that this unofficial ‘supreme court’ is already feeling frustrated by the limited binary choices it’s being asked to make (in the Trump case, either reversing the ban entirely or continuing it indefinitely).

The FOB’s unofficial message seems to be that the tools are simply far too blunt. Although Facebook has never said it will be bound by any wider policy suggestions the Board might make — only that it will abide by the specific individual review decisions. (Which is why a common critique of the Board is that it’s toothless where it matters.)

How aggressive the Board will be in pushing Facebook to be less frustrating very much remains to be seen.

“None of this is going to be solved quickly,” Rusbridger went on to tell the committee in more general remarks on the challenges of moderating speech in the digital era. Getting to grips with the Internet’s publishing revolution could in fact, he implied, take the work of generations — making the customary reference to the long tail of societal disruption that flowed from Gutenberg’s invention of the printing press.

If Facebook was hoping the FOB would kick hard (and thorny-in-its-side) questions around content moderation into long and intellectual grasses, it’s surely delighted with the level of beard-stroking that Rusbridger’s evidence implies is now going on inside the Board. (If, possibly, slightly less enchanted by the prospect of its appointees asking to poke around its algorithmic black boxes.)

Kate Klonick, an assistant professor at St John’s University Law School, was also giving evidence to the committee — having written an article on the inner workings of the FOB, published recently in the New Yorker, after she was given wide-ranging access by Facebook to observe the process of the body being set up.

The Lords committee was keen to learn more on the workings of the FOB and pressed the witnesses several times on the question of the Board’s independence from Facebook.

Rusbridger batted away concerns on that front — saying “we don’t feel we work for Facebook at all” — though Board members are paid by Facebook via a trust it set up to put the FOB at arm’s length from the corporate mothership. And the committee didn’t shy away from raising the payment point to query how genuinely independent members can be.

“I feel highly independent,” Rusbridger said. “I don’t think there’s any obligation at all to be nice to Facebook or to be horrible to Facebook.”

“One of the nice things about this Board is occasionally people will say but if we did that that will scupper Facebook’s economic model in such and such a country. To which we answer well that’s not our problem. Which is a very liberating thing,” he added.

Of course it’s hard to imagine a sitting member of the FOB being able to answer the independence question any other way — unless they were simultaneously resigning their commission (which, to be clear, Rusbridger wasn’t).

He confirmed that Board members can serve three terms of three years apiece — so he could have almost a decade of beard-stroking on Facebook’s behalf ahead of him.

Klonick, meanwhile, emphasized the scale of the challenge it had been for Facebook to try to build from scratch a quasi-independent oversight body and create distance between itself and its claimed watchdog.

“Building an institution to be a watchdog institution — it is incredibly hard to transition to institution-building and to break those bonds [between the Board and Facebook] and set up these new people with frankly this huge set of problems and a new technology and a new back end and a content management system and everything,” she said.

Rusbridger had said the Board went through an extensive training process that involved participation from Facebook representatives during the ‘onboarding’. But he went on to describe a moment when the training had finished and the FOB realized some Facebook reps were still joining their calls — saying that at that point the Board felt empowered to tell Facebook to leave.

“This was exactly the type of moment — having watched this — that I knew had to happen,” added Klonick. “There had to be some type of formal break — and it was told to me that this was a natural moment that they had done their training and this was going to be moment of push back and breaking away from the nest. And this was it.”

However, if your measure of independence is not having Facebook literally listening in on the Board’s calls, you do have to query how much Kool-Aid Facebook may have successfully doled out to its chosen and willing participants over the long and intricate process of programming its own watchdog — including to the outsiders it allowed in to observe the setup.

The committee was also interested in the fact the FOB has so far mostly ordered Facebook to reinstate content its moderators had previously taken down.

In January, when the Board issued its first decisions, it overturned four out of five Facebook takedowns — including in relation to a number of hate speech cases. The move quickly attracted criticism over the direction of travel. After all, the wider critique of Facebook’s business is that it’s far too reluctant to remove toxic content (it only banned Holocaust denial last year, for example). And lo! Here’s its self-styled ‘Oversight Board’ taking decisions to reverse hate speech takedowns…

The unofficial and oppositional ‘Real Facebook Board’ — which is truly independent and heavily critical of Facebook — pounced and decried the decisions as “shocking”, saying the FOB had “bent over backwards to excuse hate”.

Klonick said the reality is that the FOB is not Facebook’s supreme court — but rather it’s essentially just “a dispute resolution mechanism for users”.

If that assessment is true — and it sounds spot on, so long as you recall the fantastically tiny number of users who get to use it — the amount of PR Facebook has been able to generate off of something that should really just be a standard feature of its platform is truly incredible.

Klonick argued that the Board’s early reversals were the result of it hearing from users objecting to content takedowns — which had made it “sympathetic” to their complaints.

“Absolute frustration at not knowing specifically what rule was broken or how to avoid breaking the rule again or what they did to be able to get there or to be able to tell their side of the story,” she said, listing the kinds of things Board members had told her they were hearing from users who had petitioned for a review of a takedown decision against them.

“I think that what you’re seeing in the Board’s decision is, first and foremost, to try to build some of that back in,” she suggested. “Is that the signal that they’re sending back to Facebook — that it’s pretty low hanging fruit to be honest. Which is let people know the exact rule, give them a fact-to-fact type of analysis or application of the rule to the facts and give them that kind of read in to what they’re seeing and people will be happier with what’s going on.

“Or at least just feel a little bit more like there is a process and it’s not just this black box that’s censoring them.”

In his response to the committee’s query, Rusbridger discussed how he approaches review decision-making.

“In most judgements I begin by thinking well why would we restrict freedom of speech in this particular case — and that does get you into interesting questions,” he said, having earlier summed up his school of thought on speech as akin to the ‘fight bad speech with more speech’ Justice Brandeis type view.

“The right not to be offended has been engaged by one of the cases — as opposed to the borderline between being offended and being harmed,” he went on. “That issue has been argued about by political philosophers for a long time and it certainly will never be settled absolutely.

“But if you went along with establishing a right not to be offended that would have huge implications for the ability to discuss almost anything in the end. And yet there have been one or two cases where essentially Facebook, in taking something down, has invoked something like that.”

“Harm as oppose[d] to offence is clearly something you would treat differently,” he added. “And we’re in the fortunate position of being able to hire in experts and seek advisors on the harm here.”

While Rusbridger didn’t sound troubled about the challenges and pitfalls facing the Board when it may have to set the “borderline” between offensive speech and harmful speech itself — being able to (further) outsource expertise presumably helps — he did raise a number of other operational concerns during the session. Including over the lack of technical expertise among current board members (who were purely Facebook’s picks).

Without technical expertise, how can the Board ‘examine the algorithm’, as he suggested it would want to? It won’t be able to understand Facebook’s content distribution machine in any meaningful way.

Since the Board currently lacks technical expertise, wider questions arise about its function — and whether its first learned cohort might not be played as useful idiots from Facebook’s self-interested perspective — helping it gloss over and deflect deeper scrutiny of its algorithmic, money-minting choices.

If you don’t really understand how the Facebook machine functions, technically and economically, how can you conduct any kind of meaningful oversight at all? (Rusbridger evidently gets that — but is also content to wait and see how the process plays out. No doubt the intellectual exercise and insider view is fascinating. “So far I’m finding it highly absorbing,” as he admitted in his evidence opener.)

“People say to me you’re on that Board but it’s well known that the algorithms reward emotional content that polarises communities because that makes it more addictive. Well I don’t know if that’s true or not — and I think as a board we’re going to have to get to grips with that,” he went on to say. “Even if that takes many sessions with coders speaking very slowly so that we can understand what they’re saying.”

“I do think our responsibility will be to understand what these machines are — the machines that are going in rather than the machines that are moderating,” he added. “What their metrics are.”

Both witnesses raised another concern: that the kind of complex, nuanced moderation decisions the Board is making won’t scale — suggesting they’re too specific to generally inform AI-based moderation. Nor will they necessarily be actionable by the staffed moderation system Facebook currently operates (which gives its thousands of human moderators a fantastically tiny amount of thinking time per content decision).

Despite that, the issue of Facebook’s vast scale versus the Board’s limited and Facebook-defined function — to fiddle at the margins of its content empire — was one overarching point that hung uneasily over the session, without being properly grappled with.

“I think your question about ‘is this easily communicated’ is a really good one that we’re wrestling with a bit,” Rusbridger said, conceding that he’d had to brain up on a whole bunch of unfamiliar “human rights protocols and norms from around the world” to feel qualified to rise to the demands of the review job.

Scaling that level of training to the tens of thousands of moderators Facebook currently employs to carry out content moderation would of course be eye-wateringly expensive. Nor is it on offer from Facebook. Instead it’s hand-picked a crack team of 40 very expensive and learned experts to tackle an infinitesimally smaller number of content decisions.

“I think it’s important that the decisions we come to are understandable by human moderators,” Rusbridger added. “Ideally they’re understandable by machines as well — and there is a tension there because sometimes you look at the facts of a case and you decide it in a particular way with reference to those three standards [Facebook’s community standard, Facebook’s values and “a human rights filter”]. But in the knowledge that that’s going to be quite a tall order for a machine to understand the nuance between that case and another case.

“But, you know, these are early days.”


Sequoia Capital India’s Surge invests $2M in sales engagement platform Outplay


Outplay’s team members on a video call

Sales engagement platforms (SEPs) help sales teams automate and track the large number of tasks they need to do each day as they contact leads and home in on potential deals. Focused on small-to-medium-sized companies, SEP startup Outplay announced today it has raised $2 million from Sequoia Capital India’s Surge program for early-stage startups.

Outplay was founded in January 2020 by brothers Ram and Laxman Papineni and now counts more than 300 clients. Before launching Outplay, the Papineni brothers built AppVirality, a referral marketing tool for app developers.

Laxman told TechCrunch that Outplay’s customers come from sectors like IT, computer software, marketing and advertising, and recruiting, and most are based in North America and Europe.

Outplay is designed for teams that use multiple channels to reach potential customers, including phone calls, text messages, email, live chats on websites, and social media platforms like LinkedIn or Twitter. It integrates with customer relationship management platforms like Salesforce and Pipedrive, giving sales people a new interface that includes productivity and automation tools to cut the time they spend on administrative tasks.

Outplay’s platform

For example, Outplay can be used to create sequences that send initial messages through different platforms, then automatically follow up with new messages if there isn’t a reply within a pre-set time frame. Outplay also provides analytics to help sales people track how well sales campaigns are working.

Two of Outplay’s biggest competitors are Outreach and SalesLoft, both of which hit unicorn status in recent funding rounds. Laxman said Outplay is focused on ease of use, with other differentiators including more integrations with CRMs and other software, and a strong customer support team.
