Jam collaborative software launches Jam Genies to give small startups access to experts

As the world moves towards remote work, the collaborative tools market continues to expand. Jam, a platform for editing and improving your company’s website, is adding to the trend by introducing a new arm to its product today called Jam Genies.

Jam Genies is a network of highly experienced product experts that Jam users can tap for guidance and advice on their specific issues and challenges.

Cofounder Dani Grant explained to TechCrunch that many small and early-stage companies don’t have the deep pockets to hire a consultant when they run into a challenge: many consultants charge exorbitant rates and often require a minimum time commitment, which makes it incredibly difficult to get bite-sized advice at a reasonable cost.

That’s where Jam Genies comes in.

Genies hail from a variety of ‘verticals’, including investing, design, brand, and growth. The list includes:

  • Brianne Kimmel – Angel investor and founder of Worklife VC. Investor in Webflow, Hopin & 40+ software companies building the future of work.
  • Erik Torenberg – Partner at Village Global, a fund backed by Bill Gates, Jeff Bezos, Mark Zuckerberg and others. Founding team at Product Hunt.
  • Sahil Lavingia – Founder & CEO of Gumroad, first engineer at Pinterest, and angel investing $10 million a year via shl.vc.
  • Iheanyi Ekechukwu – Engineer turned angel investor, and scout investor for Kleiner Perkins.
  • Soleio – Facebook’s second product designer, former head of design at Dropbox, and advisor at Figma. Invests in design-focused founders at Combine.
  • Dara Oke – Product design lead at Netflix, formerly designed and built products at Microsoft, Twitter, and Intel.
  • Katie Suskin – Designed many products you know and love like Microsoft Calendar, OkCupid, Tia, and … Jam.
  • Julius Tarng – Helped scale design at Webflow and led design tooling at Facebook.
  • Abe Vizcarra – Currently leading brand at Fast, former Global Design Director at Snap Inc.
  • Tiffany Zhong – CEO, Zebra IQ. Recognized by Forbes as one of the Top 10 Gen Z Experts.
  • Nicole Obst – Former Head of Web Growth (B2C) at Dropbox and Head of Growth at Loom.
  • James Sherrett – 9th employee at Slack, led the original marketing and sales of Slack.
  • Asher King Abramson – CEO at Got Users, a growth marketing platform widely used by startups around Silicon Valley.

Users on the Jam platform can choose a Genie and set an appointment through Calendly. The sessions last half an hour and cost a flat fee of $250, all of which goes to the Genie.

Jam raised $3.5 million in October from Union Square Ventures, Version One Ventures, BoxGroup, Village Global and a variety of angel investors to fuel growth and further build out the product. Jam Genies is, in many respects, a growth initiative for the company to better acquaint early-stage startups with the platform.

The main Jam product lets groups of developers and designers work collaboratively on a website, leaving comments, discussing changes, and creating and assigning tasks. The platform integrates with all the usual suspects, such as Jira, Trello, GitHub, Slack, Figma, and more.

Since its launch in October 2020, the company has signed up 4,000 customers for its private beta waitlist, with 14,000 Jam comments created on the platform. The introduction of Jam Genies could add momentum to this growth push.

Lyron Foster is a Hawaii based African American Musician, Author, Actor, Blogger, Filmmaker, Philanthropist and Multinational Serial Tech Entrepreneur.

What is an “algorithm”? It depends whom you ask

Describing a decision-making system as an “algorithm” is often a way to deflect accountability for human decisions. For many, the term implies a set of rules based objectively on empirical evidence or data. It also suggests a system that is highly complex—perhaps so complex that a human would struggle to understand its inner workings or anticipate its behavior when deployed.

But is this characterization accurate? Not always.

For example, in late December Stanford Medical Center’s misallocation of covid-19 vaccines was blamed on a distribution “algorithm” that favored high-ranking administrators over frontline doctors. The hospital claimed to have consulted with ethicists to design its “very complex algorithm,” which a representative said “clearly didn’t work right,” as MIT Technology Review reported at the time. While many people interpreted the use of the term to mean that AI or machine learning was involved, the system was in fact a medical algorithm, which is functionally different. It was more akin to a very simple formula or decision tree designed by a human committee.

This disconnect highlights a growing issue. As predictive models proliferate, the public becomes more wary of their use in making critical decisions. But as policymakers begin to develop standards for assessing and auditing algorithms, they must first define the class of decision-making or decision support tools to which their policies will apply. Leaving the term “algorithm” open to interpretation could place some of the models with the biggest impact beyond the reach of policies designed to ensure that such systems don’t hurt people.

How to ID an algorithm

So is Stanford’s “algorithm” an algorithm? That depends on how you define the term. While there’s no universally accepted definition, a common one comes from a 1971 textbook written by computer scientist Harold Stone, who states: “An algorithm is a set of rules that precisely define a sequence of operations.” This definition encompasses everything from recipes to complex neural networks: an audit policy based on it would be laughably broad.

In statistics and machine learning, we usually think of the algorithm as the set of instructions a computer executes to learn from data. In these fields, the resulting structured information is typically called a model. The information the computer learns from the data via the algorithm may look like “weights” by which to multiply each input factor, or it may be much more complicated. The complexity of the algorithm itself may also vary. And the impacts of these algorithms ultimately depend on the data to which they are applied and the context in which the resulting model is deployed. The same algorithm could have a net positive impact when applied in one context and a very different effect when applied in another.
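
One way to make the algorithm/model distinction concrete is with a few lines of code. The sketch below is a minimal, hypothetical illustration in plain NumPy (it is not drawn from any system discussed in this piece): the fitting procedure is the algorithm, and the array of weights it returns is the model that actually gets deployed.

```python
import numpy as np

# The "algorithm" here is ordinary least squares: a fixed procedure that
# turns training data into learned weights.
def fit_linear_model(X, y):
    X = np.column_stack([X, np.ones(len(X))])    # add a column of ones for an intercept
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)
    return weights                               # the "model": just numbers, not the procedure

# Toy data: two input factors and an outcome we want to predict.
X_train = np.array([[1.0, 2.0], [2.0, 0.5], [3.0, 1.0], [4.0, 3.0]])
y_train = np.array([3.1, 2.4, 4.0, 7.2])

model = fit_linear_model(X_train, y_train)       # run the algorithm once on the data
new_input = np.array([2.5, 1.5, 1.0])            # two features plus the intercept term
print(model, new_input @ model)                  # apply the deployed model to a new case
```

The same fitting algorithm applied to a different dataset would produce a different model, which is exactly why the impact of such systems depends on the data and the deployment context.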

In other domains, what’s described above as a model is itself called an algorithm. Though that’s confusing, under the broadest definition it is also accurate: models are rules (learned by the computer’s training algorithm instead of stated directly by humans) that define a sequence of operations. For example, last year in the UK, the media described the failure of an “algorithm” to assign fair scores to students who couldn’t sit for their exams because of covid-19. Surely, what these reports were discussing was the model—the set of instructions that translated inputs (a student’s past performance or a teacher’s evaluation) into outputs (a score).

What seems to have happened at Stanford is that humans—including ethicists—sat down and determined what series of operations the system should use to determine, on the basis of inputs such as an employee’s age and department, whether that person should be among the first to get a vaccine. From what we know, this sequence wasn’t based on an estimation procedure that optimized for some quantitative target. It was a set of normative decisions about how vaccines should be prioritized, formalized in the language of an algorithm. This approach qualifies as an algorithm in medical terminology and under the broad definition, even though the only intelligence involved was that of humans.
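
For illustration, the kind of human-authored sequence of operations described above can be written out in a few lines. The sketch below is purely hypothetical; the fields, thresholds, and weights are invented for the example and are not Stanford’s actual criteria.

```python
# A hypothetical, human-authored prioritization rule: every threshold below is a
# normative choice made by people, not a parameter learned from data.
def vaccine_priority_score(age: int, department: str, patient_facing_hours: int) -> int:
    score = 0
    if age >= 65:
        score += 2                       # prioritize older staff
    if patient_facing_hours >= 20:
        score += 3                       # prioritize frontline exposure
    if department in {"emergency", "icu"}:
        score += 1
    return score                         # higher score = earlier in line

# Under the broad definition this is an algorithm, even though the only
# intelligence involved is human.
print(vaccine_priority_score(age=70, department="administration", patient_facing_hours=0))
```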

Focus on impact, not input

Lawmakers are also weighing in on what an algorithm is. Introduced in the US Congress in 2019, HR2291, or the Algorithmic Accountability Act, uses the term “automated decisionmaking system” and defines it as “a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers.”

Similarly, New York City is considering Int 1894, a law that would introduce mandatory audits of “automated employment decision tools,” defined as “any system whose function is governed by statistical theory, or systems whose parameters are defined by such systems.” Notably, both bills mandate audits but provide only high-level guidelines on what an audit is.

As decision-makers in both government and industry create standards for algorithmic audits, disagreements about what counts as an algorithm are likely. Rather than trying to agree on a common definition of “algorithm” or a particular universal auditing technique, we suggest evaluating automated systems primarily based on their impact. By focusing on outcome rather than input, we avoid needless debates over technical complexity. What matters is the potential for harm, regardless of whether we’re discussing an algebraic formula or a deep neural network.

Impact is a critical assessment factor in other fields. It’s built into the classic DREAD framework in cybersecurity, which was first popularized by Microsoft in the early 2000s and is still used at some corporations. The “A” in DREAD asks threat assessors to quantify “affected users” by asking how many people would suffer the impact of an identified vulnerability. Impact assessments are also common in human rights and sustainability analyses, and we’ve seen some early developers of AI impact assessments create similar rubrics. For example, Canada’s Algorithmic Impact Assessment provides a score based on qualitative questions such as “Are clients in this line of business particularly vulnerable? (yes or no).”
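
To illustrate how a qualitative rubric like that can be reduced to a score, here is a small, invented sketch (it is not the actual Canadian questionnaire or scoring scheme): each yes/no answer carries a weight, and the total places a system in a risk tier.

```python
# Hypothetical impact-assessment rubric: question identifiers and weights are
# invented for illustration only.
QUESTIONS = {
    "serves_vulnerable_clients": 3,          # e.g. "Are clients particularly vulnerable?"
    "decision_is_fully_automated": 2,
    "affects_access_to_essential_services": 3,
    "outcome_is_easily_reversible": -1,      # a mitigating factor lowers the score
}

def impact_score(answers: dict) -> int:
    # Sum the weights of every question answered "yes".
    return sum(weight for question, weight in QUESTIONS.items() if answers.get(question))

def risk_tier(score: int) -> str:
    return "high" if score >= 5 else "medium" if score >= 2 else "low"

answers = {"serves_vulnerable_clients": True, "decision_is_fully_automated": True}
score = impact_score(answers)
print(score, risk_tier(score))               # 5, "high"
```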

There are certainly difficulties in introducing a loosely defined term such as “impact” into any assessment. The DREAD framework was later supplemented or replaced by STRIDE, in part because of challenges with reconciling different beliefs about what threat modeling entails. Microsoft stopped using DREAD in 2008.

In the AI field, conferences and journals have already introduced impact statements with varying degrees of success and controversy. It’s far from foolproof: impact assessments that are purely formulaic can easily be gamed, while an overly vague definition can lead to arbitrary or impossibly lengthy assessments.

Still, it’s an important step forward. The term “algorithm,” however defined, shouldn’t be a shield to absolve the humans who designed and deployed any system of responsibility for the consequences of its use. This is why the public is increasingly demanding algorithmic accountability—and the concept of impact offers a useful common ground for different groups working to meet that demand.

Kristian Lum is an assistant research professor in the Computer and Information Science Department at the University of Pennsylvania.

Rumman Chowdhury is the director of the Machine Ethics, Transparency, and Accountability (META) team at Twitter. She was previously the CEO and founder of Parity, an algorithmic audit platform, and global lead for responsible AI at Accenture.

MyHeritage now lets you animate old family photos using deepfakery

AI-enabled synthetic media is being used as a tool for manipulating real emotions and capturing user data by genealogy service MyHeritage, which has just launched a new feature — called ‘Deep Nostalgia’ — that lets users upload a photo of a person (or several people) and see individual faces animated by algorithm.

The Black Mirror-style pull of seeing long lost relatives — or famous people from another era — brought to a synthetic approximation of life, eyes swivelling, faces tilting as if they’re wondering why they’re stuck inside this useless digital photo frame, has led to an inexorable stream of social shares since it was unveiled yesterday at a family history conference… 

MyHeritage’s AI-powered viral marketing playbook with this deepfakery isn’t a complicated one: they’re going straight for tugging on your heartstrings to grab data that can be used to drive sign-ups for their other (paid) services. (Selling DNA tests is their main business.)

It’s free to animate a photo using the ‘Deep Nostalgia’ tech on MyHeritage’s site, but you don’t get to see the result until you hand over at least an email (along with the photos you want animating, of course) — and agree to its T&Cs and privacy policy, both of which have attracted a number of concerns over the years.

Last year, for example, the Norwegian Consumer Council reported MyHeritage to the national consumer protection and data authorities after a legal assessment of the T&Cs found the contract it asks customers to sign to be “incomprehensible”.

In 2018 MyHeritage also suffered a major data breach — and data from that breach was later found for sale on the dark web, among a wider cache of hacked account info pertaining to several other services.

The company — which, as we reported earlier this week, is being acquired by a US private equity firm for ~$600M — is doubtless relying on the deep pull of nostalgia to smooth over any individual misgivings about handing over data and agreeing to its terms.

The face animation technology itself is impressive enough — if you set aside the ethics of encouraging people to drag their long lost relatives into the uncanny valley to help MyHeritage cross-sell DNA testing (with all the massive privacy considerations around putting that kind of data in the hands of a commercial entity).

Looking at the inquisitive face of my great-grandmother, I do have to wonder what she would have made of all this.

The facial animation feature is powered by Israeli company D-ID, a TechCrunch Disrupt Battlefield alum — which started out building tech to digitally de-identify faces, with an eye on protecting images and video from being identified by facial recognition algorithms.

It released a demo video of the photo-animating technology last year. The tech uses a driver video to animate the photo — mapping the facial features of the photo onto that base driver to create a ‘live portrait’, as D-ID calls it.

“The Live Portrait solution brings still photos to life. The photo is mapped and then animated by a driver video, causing the subject to move its head and facial features, mimicking the motions of the driver video,” D-ID said in a press release. “This technology can be implemented by historical organizations, museums, and educational programs to animate well-known figures.”

It’s offering live portraits as part of a wider ‘AI Face’ platform which will offer third parties access to other deep learning, computer vision and image processing technologies. D-ID bills the platform as a ‘one-stop shop’ for synthesized video creation.

Other tools include a ‘face anonymization’ feature which replaces one person’s face in a video with another’s (allowing documentary filmmakers to protect a whistleblower’s identity, for example); and a ‘talking heads’ feature that turns an audio track into a video of a person appearing to speak those words, which can be used for lip syncing or to replace the need to pay actors to appear in content such as marketing videos.

The age of synthesized media is going to be a weird one, that’s for sure.

 

An AI is training counselors to deal with teens in crisis

Counselors volunteering at the Trevor Project need to be prepared for their first conversation with an LGBTQ teen who may be thinking about suicide. So first, they practice. One of the ways they do it is by talking to fictional personas like “Riley,” a 16-year-old from North Carolina who is feeling a bit down and depressed. With a team member playing Riley’s part, trainees can drill into what’s happening: they can uncover that the teen is anxious about coming out to family, has recently told friends (which didn’t go well), and has experienced suicidal thoughts before, if not at the moment.

Now, though, Riley isn’t being played by a Trevor Project employee but is instead being powered by AI.

Just like the original persona, this version of Riley—trained on thousands of past transcripts of role-plays between counselors and the organization’s staff—still needs to be coaxed a bit to open up, laying out a situation that can test what trainees have learned about the best ways to help LGBTQ teens. 

Counselors aren’t supposed to pressure Riley to come out. The goal, instead, is to validate Riley’s feelings and, if needed, help develop a plan for staying safe. 

Crisis hotlines and chat services make a fundamental promise: reach out, and we’ll connect you with a real human who can help. But the need can outpace the capacity of even the most successful services. The Trevor Project believes that 1.8 million LGBTQ youth in America seriously consider suicide each year. The existing 600 counselors for its chat-based services can’t handle that need. That’s why the group—like an increasing number of mental health organizations—turned to AI-powered tools to help meet demand. It’s a development that makes a lot of sense, while simultaneously raising questions about how well current AI technology can perform in situations where the lives of vulnerable people are at stake.

Taking risks—and assessing them

The Trevor Project believes it understands this balance—and stresses what Riley doesn’t do. 

“We didn’t set out to and are not setting out to design an AI system that will take the place of a counselor, or that will directly interact with a person who might be in crisis,” says Dan Fichter, the organization’s head of AI and engineering. This human connection is important in all mental health services, but it might be especially important for the people the Trevor Project serves. According to the organization’s own research in 2019, LGBTQ youth with at least one accepting adult in their life were 40% less likely to report a suicide attempt in the previous year. 

The AI-powered training role-play, called the crisis contact simulator and supported by money and engineering help from Google, is the second project the organization has developed this way: it also uses a machine-learning algorithm to help determine who’s at highest risk of danger. (It trialed several other approaches, including many that didn’t use AI, but the algorithm simply gave the most accurate predictions for who was experiencing the most urgent need.)

AI-powered risk assessment isn’t new to suicide prevention services: the Department of Veterans Affairs also uses machine learning to identify at-risk veterans in its clinical practices, as the New York Times reported late last year. 

Opinions vary on the usefulness, accuracy, and risk of using AI in this way. In specific environments, AI can be more accurate than humans in assessing people’s suicide risk, argues Thomas Joiner, a psychology professor at Florida State University who studies suicidal behavior. In the real world, with more variables, AI seems to perform about as well as humans. What it can do, however, is assess more people at a faster rate. 

Thus, it’s best used to help human counselors, not replace them. The Trevor Project still relies on humans to perform full risk assessments on young people who use its services. And after counselors finish their role-plays with Riley, those transcripts are reviewed by a human. 

How the system works

The crisis contact simulator was developed because doing role-plays takes up a lot of staff time and is limited to normal working hours, even though a majority of counselors plan on volunteering during night and weekend shifts. But even if the aim was to train more counselors faster, and better accommodate volunteer schedules, efficiency wasn’t the only ambition. The developers still wanted the role-play to feel natural, and for the chatbot to nimbly adapt to a volunteer’s mistakes. Natural-language-processing algorithms, which had recently gotten really good at mimicking human conversations, seemed like a good fit for the challenge. After testing two options, the Trevor Project settled on OpenAI’s GPT-2 algorithm.

The chatbot uses GPT-2 for its baseline conversational abilities. That model is trained on 45 million pages from the web, which teaches it the basic structure and grammar of the English language. The Trevor Project then trained it further on all the transcripts of previous Riley role-play conversations, which gave the bot the materials it needed to mimic the persona.
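
For readers curious what “trained it further on the transcripts” looks like in practice, below is a minimal fine-tuning sketch using the open-source Hugging Face transformers library. It illustrates the general approach rather than the Trevor Project’s actual pipeline; the transcript file name, hyperparameters, and prompt format are all assumptions.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Start from the pretrained GPT-2 weights, which supply basic English fluency.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.train()

# Hypothetical corpus: each line is one chunk of a past Riley role-play transcript.
with open("riley_transcripts.txt") as f:
    texts = [line.strip() for line in f if line.strip()]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

for epoch in range(3):
    for text in texts:
        input_ids = tokenizer(text, truncation=True, max_length=512,
                              return_tensors="pt").input_ids
        # Causal language-modeling loss: labels == input_ids, and the model
        # handles the one-token shift internally.
        loss = model(input_ids, labels=input_ids).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# After fine-tuning, the model produces Riley-style replies from a counselor prompt.
prompt = tokenizer("Counselor: Hi Riley, how are you feeling today?\nRiley:",
                   return_tensors="pt")
reply_ids = model.generate(**prompt, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```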

Throughout the development process, the team was surprised by how well the chatbot performed. There is no database storing details of Riley’s bio, yet the chatbot stayed consistent because every transcript reflects the same storyline.

But there are also trade-offs to using AI, especially in sensitive contexts with vulnerable communities. GPT-2, and other natural-language algorithms like it, are known to embed deeply racist, sexist, and homophobic ideas. More than one chatbot has been led disastrously astray this way, the most recent being a South Korean chatbot called Lee Luda that had the persona of a 20-year-old university student. After quickly gaining popularity and interacting with more and more users, it began using slurs to describe the queer and disabled communities.

The Trevor Project is aware of this and designed ways to limit the potential for trouble. While Lee Luda was meant to converse with users about anything, Riley is very narrowly focused. Volunteers won’t deviate too far from the conversations it has been trained on, which minimizes the chances of unpredictable behavior.

This also makes it easier to comprehensively test the chatbot, which the Trevor Project says it is doing. “These use cases that are highly specialized and well-defined, and designed inclusively, don’t pose a very high risk,” says Nenad Tomasev, a researcher at DeepMind.

Human to human

This isn’t the first time the mental health field has tried to tap into AI’s potential to provide inclusive, ethical assistance without hurting the people it’s designed to help. Researchers have developed promising ways of detecting depression from a combination of visual and auditory signals. Therapy “bots,” while not equivalent to a human professional, are being pitched as alternatives for those who can’t access a therapist or are uncomfortable confiding in a person.

Each of these developments, and others like them, requires thinking about how much agency AI tools should have when it comes to treating vulnerable people. And the consensus seems to be that at this point the technology isn’t really suited to replacing human help.

Still, Joiner, the psychology professor, says this could change over time. While replacing human counselors with AI copies is currently a bad idea, “that doesn’t mean that it’s a constraint that’s permanent,” he says. People “have artificial friendships and relationships” with AI services already. As long as people aren’t being tricked into thinking they are having a discussion with a human when they are talking to an AI, he says, it could be a possibility down the line.

In the meantime, Riley will never face the youths who actually text in to the Trevor Project: it will only ever serve as a training tool for volunteers. “The human-to-human connection between our counselors and the people who reach out to us is essential to everything that we do,” says Kendra Gaunt, the group’s data and AI product lead. “I think that makes us really unique, and something that I don’t think any of us want to replace or change.”
