31 12 / 2011

In November of 2011 I launched Couptivate, a digital wallet for daily deal purchases.  You can check out my new blogging home at couptivate.wordpress.com, where I write about all things daily deal.

See you there!

David

16 11 / 2010

Mark Zuckerberg took to the stage in SF yesterday to announce an innovative new messaging platform that will be rolling out for Facebook users.  The gist of this new platform is that users should not have to think about which method of electronic communication to use when speaking to others.  Instead, all messages (email, SMS, IM, etc.) should reside in a single inbox (and appear as a single conversation), and all outgoing communication should be routed through the appropriate channel without users having to think about it.  In many ways this paradigm strikes me as similar to that of Google TV (which I previously discussed here).  As technology advances, the priority has become simplicity.  As Arthur C. Clarke famously put it:


Any sufficiently advanced technology is indistinguishable from magic.

Technologies, whether we’re talking about video or electronic communication, should just work, without requiring the user to deal with extraneous logistical issues or complex internal systems (this philosophy continues to be most successfully executed by Apple).
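
Facebook hasn’t detailed how its channel selection actually works; the sketch below is only a toy illustration of the general idea, routing an outgoing message through whichever channel the recipient has connected, so the sender never has to choose.  The channel names and preference order are my own assumptions:

```python
# A toy channel router: the sender never picks email vs. SMS vs. IM;
# the platform tries the recipient's most immediate connected channel.
# Channel names and preference order are invented for illustration.
PREFERENCE = ["im", "sms", "email"]

def route(recipient_channels, message):
    """Deliver via the recipient's most immediate available channel."""
    for channel in PREFERENCE:
        if channel in recipient_channels:
            return channel, message
    raise ValueError("no channel available for recipient")

print(route({"sms", "email"}, "lunch at noon?"))  # ('sms', 'lunch at noon?')
```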

In any event, whether Zuck’s new messaging features follow through on this goal remains to be seen (I’m hoping to try it out shortly [thanks, Mark T, for the invite]).  What I find more interesting is Facebook’s promising use of its massive social graph to create a super relevance filter that sorts messages based upon your relationship with the sender and achieves a much higher signal-to-noise ratio than we’ve seen thus far.

Your first instinct may be to question whether your Facebook network of connections provides enough data to create an effective filter.  If you have just a few hundred Facebook contacts, won’t this filter potentially wind up cutting you off from the millions to whom you have no connection?  What seems to be overlooked is that Facebook does not provide users with any social graph context (unlike LinkedIn, which tells users if they’re connected to others, out to the third degree).  We have no idea how many individuals exist in our wider Facebook network.  Based on the numbers that LinkedIn provides me on my own connections, I imagine that my Facebook social graph is larger by a few orders of magnitude.  Essentially, anyone I communicate with is likely to be somewhere out in the web of my wider network.  By leveraging this data properly, Facebook should be able to construct a highly effective filter, leading to a messaging platform that wastes less of our time and appears closer and closer to the aforementioned magic.
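
Facebook has never published how such a filter would rank messages, but a minimal sketch of the general technique might combine degrees of separation in the graph with past interaction history.  Everything below (the graph, the weights, the scoring formula) is an illustrative assumption, not Facebook’s algorithm:

```python
from collections import deque

def graph_distance(graph, source, target, max_depth=3):
    """Breadth-first search for the degree of separation between two
    users, up to max_depth hops; returns None if no path that short."""
    if source == target:
        return 0
    seen, frontier = {source}, deque([(source, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for friend in graph.get(node, ()):
            if friend == target:
                return depth + 1
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, depth + 1))
    return None

def relevance_score(graph, me, sender, interactions):
    """Score a sender: closer connections and frequent past
    interactions rank higher; unreachable senders rank lowest."""
    distance = graph_distance(graph, me, sender)
    proximity = 0.0 if distance is None else 1.0 / (1 + distance)
    return proximity + 0.1 * interactions.get(sender, 0)

# Toy example: rank two incoming messages by sender.
graph = {"me": {"alice"}, "alice": {"me", "bob"}, "bob": {"alice"}}
interactions = {"alice": 42, "bob": 3}
for sender in ("bob", "alice"):
    print(sender, relevance_score(graph, "me", sender, interactions))
```

In a scheme like this, the interaction weight would dominate for people you message constantly, which matches the intuition that a close friend’s note should never be buried.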

10 11 / 2010

Earlier in the week I had an opportunity to hear Stanford Law Professor Barbara van Schewick, a noted net-neutrality scholar, present an interesting and unique framework for net-neutrality regulation.  A little background:  net-neutrality is a concept that argues against restrictions by Internet service providers on Internet usage, whether directed at particular applications or particular users.  Its adherents believe that the Internet has become a critical channel of communication and commerce, and that any actions that could reduce innovation, distort competition or otherwise subvert the growth and beneficial development of Internet activity should be prohibited.  Because Internet service providers may have competitive incentives to place restrictions on Internet usage (e.g., an ISP that also provides traditional telephony services may be incentivized to restrict an Internet user from operating a VOIP product a la Skype), the net-neutrality debate is concerned with the regulatory regime utilized by the FCC (among others) with respect to the ISPs.

It’s generally agreed that “blocking” (i.e., preventing certain types of users or applications from using the Internet altogether) by an ISP should be flatly prohibited.  Instead, the discussion centers on “discrimination”.  Namely, can ISPs treat different users or types of applications differently, by, for example, slowing down transmission speeds or making the transmissions less precise (i.e., allowing greater packet loss)?  This question is more nuanced than it first appears, because there are a number of valid reasons why certain types of discrimination should be permitted.  For instance, certain types of applications function better with certain types of Internet service.  As Professor van Schewick points out in her presentation, an Internet telephony application needs a fast connection, but is less dependent on a precise transmission to function.  Conversely, an email application needs a precise transmission (think missing words from the email text), but does not require speed.  The obvious argument here is that a regulatory framework that did not allow for any discrimination could actually wind up hurting consumers and innovators by limiting the ISPs’ ability to offer them tailored services.
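
To make the trade-off concrete, here is a toy sketch of how different application classes map to different service needs.  The categories and numeric thresholds are my own illustrative assumptions, not figures from van Schewick’s talk:

```python
# Illustrative service requirements per application class: VOIP
# tolerates packet loss but not delay; email tolerates delay but
# not loss. All numbers are invented for illustration.
REQUIREMENTS = {
    "voip":  {"max_latency_ms": 150,  "max_loss_pct": 5.0},
    "email": {"max_latency_ms": 5000, "max_loss_pct": 0.0},
    "video": {"max_latency_ms": 300,  "max_loss_pct": 1.0},
}

def service_is_adequate(app, latency_ms, loss_pct):
    """Check whether a network service tier meets an application's needs."""
    req = REQUIREMENTS[app]
    return latency_ms <= req["max_latency_ms"] and loss_pct <= req["max_loss_pct"]

# A fast-but-lossy tier suits a phone call, but not email.
print(service_is_adequate("voip", latency_ms=100, loss_pct=2.0))   # True
print(service_is_adequate("email", latency_ms=100, loss_pct=2.0))  # False
```

The same fast-but-lossy tier that suits a phone call is useless for email, and vice versa, which is why a flat ban on differential treatment can leave value on the table.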

Unregulated discrimination by ISPs, however, has the same practical effect as blocking.  If an ISP decided to discriminate against a VOIP application, for instance, by restricting transmission speeds, the application would function poorly and no one would use it.  Since end-users do not typically distinguish between a poor product and issues with the underlying network, an ISP could hobble an application’s market viability through discriminatory steps and cause it to die off.

One solution is to allow ISPs to discriminate, but to restrict them from discriminating within categories of applications.  An ISP could not, for instance, offer different services to Orbitz.com and Kayak.com.  In this way, the ISPs would be prevented from choosing the winners and losers with respect to specific types of applications.  It remains to be seen, however, whether it’s possible to precisely define categories, or whether certain applications fit neatly into single categories.  Given the ambiguity in category definition and the ability of ISPs to frame these definitions, this regulatory regime is also less than optimal.

Instead, van Schewick proposes that we allow discrimination, but place this power solely in the hands of users, rather than ISPs.  In this way, end-user choice is maintained without giving ISPs the ability to manipulate competition and bestow competitive advantage (or disadvantage) on certain applications or users.  In this framework, all applications would have access to the same services on the same terms, but each could choose individually whether to utilize a lesser service (for reasons of cost or otherwise).

There are logistical questions left open in van Schewick’s framework.  Namely, in shifting the power from the ISPs to the users, there are a significantly greater number of actors involved in the process, such that barriers to collective action and organization may arise.  Nevertheless, van Schewick’s proposal strikes me as a reasonable balance between competing, seemingly zero-sum principles, and is a “best of both worlds” approach that deserves serious consideration in the ongoing net-neutrality policy debate.

25 10 / 2010

This month’s NY Tech Meetup was packed with demos, as the group and the event continue to grow at a rapid clip.  Among the presenters were Catchafire and Introspectr, both discussed below.

Catchafire, which provides volunteer-matching tools for non-profits and the skilled professionals who want to volunteer with them, had a particularly interesting business model.  Any time a company is able to achieve potentially robust profitability while delivering social good, it’s worth taking a close look.  In the case of Catchafire, each volunteer opportunity is analyzed to determine an estimated cost savings for the non-profit organization.  While Catchafire charges their non-profit clients a not-insignificant commission for each successful match, the value of the volunteer opportunity far outweighs this fee, making Catchafire an easy sell to potential clients.  With the twin benefits of profitability and positive PR, I have a feeling Catchafire will become a sought-after portfolio company for many a VC.
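
The economics are easy to sanity-check with made-up numbers (Catchafire did not disclose its actual rates or fee structure at the demo):

```python
# All figures invented for illustration; Catchafire's real rates
# and commission were not disclosed at the demo.
hours = 50            # a pro-bono marketing project
market_rate = 150     # $/hour the non-profit would otherwise pay
fee_pct = 0.20        # hypothetical commission on the match

value = hours * market_rate      # $7,500 of donated services
commission = value * fee_pct     # $1,500 paid to the matchmaker
print(f"net benefit to the non-profit: ${value - commission:,.0f}")
```

Even with an aggressive commission, the non-profit comes out well ahead, which is the whole pitch.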

Introspectr also seems like a great product (though I haven’t had an opportunity to use it yet).  As we consume and receive information from more and more platforms (email, Facebook, Twitter, etc.), it becomes harder to recall information of interest.  Introspectr seeks to link all of your accounts and allow for text searches from one convenient place.  In addition, they go a few levels further by indexing things such as the text contents of email attachments (in Word and other formats).  The solution provided by their demo immediately resonated with me and I hope to try their product soon.
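
Introspectr hasn’t published how its search works; the standard technique for this kind of cross-account search is an inverted index, sketched minimally below (the account names and sample messages are invented):

```python
from collections import defaultdict

# Map each token to the set of (source, message_id) pairs containing it.
index = defaultdict(set)

def add_document(source, doc_id, text):
    """Index one message or attachment from any connected account."""
    for token in text.lower().split():
        index[token].add((source, doc_id))

def search(query):
    """Return documents containing every query token, across sources."""
    hits = [index.get(t.lower(), set()) for t in query.split()]
    return set.intersection(*hits) if hits else set()

# Invented sample data from three "linked" accounts.
add_document("email", 1, "Q3 budget attached, see spreadsheet")
add_document("twitter", 2, "great post on budget forecasting")
add_document("facebook", 3, "dinner on friday?")
print(search("budget"))  # hits in both the email and twitter sources
```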

20 10 / 2010

Last week I had the opportunity to hear Jason Finger, founder of SeamlessWeb, speak at the Startup@Work Founders Speakers Series.  As an attorney with a strong interest in entrepreneurship, I had been curious to hear about Jason’s path from corporate lawyer to founder.  As it turns out, Jason’s legal career was the detour, sandwiched between a lifelong focus on entrepreneurship.

What was really interesting about Jason’s legal experience was the tangential, yet essential, impact it had in leading him to the idea for SeamlessWeb.  Stuck late at work at his law firm during the Thanksgiving holiday, Jason struggled to find any open restaurants that would deliver him dinner.  After calling friends who worked in the area and attempting to get menus faxed to him, he decided there had to be a better way.  The rest is history.  It’s unclear whether Jason’s legal training played a role in his idea for SeamlessWeb (although, as he notes, he drew on this training extensively once the company was up and running), but it seems unlikely that he would have otherwise had the experiences that ultimately led to his inspiration.

This got me thinking about some of Steven Johnson’s theses (summarized above) on the origin of great ideas.  One thing Johnson found is that great ideas often arise from the collision of two (or more) hunches held by separate people.  But what if an individual could simulate this effect internally by being multidisciplinary?  In Jason’s case, by being an entrepreneur, attorney/MBA and businessperson, he created his own internal collision to innovate SeamlessWeb.

24 9 / 2010

Two weeks ago (sorry for the delay) I attended the monthly NY Tech Meetup held at NYU.  Among the presenters were Apture and Grovo, both discussed below.

Often a good demo can spark interest and a willingness to try a product, but it can almost never replace hands-on experience.  This was the case with Apture, a web publishing tool and browser add-on that attempts to increase user engagement by making the browsing experience more efficient.  Apture allows searching of any text via a pop-up, in-window box that can be dragged around the screen.  In addition, Apture intelligently pulls images, addresses and bit.ly-type links so that users are able to drill down on some particular piece of content dynamically, while remaining on the original site.  The ultimate result is that site visitors spend more time on the original site, rather than opening various windows, becoming distracted, and abandoning the original content.  A smart idea in theory, though critically dependent on execution and UX.

And, after using the browser add-on for a week, I am a fan of Apture’s product.  It makes the browsing experience both more seamless and more efficient, and I find that I am using Apture’s features intuitively, without having to actively decide whether to do a separate Google search or otherwise.  It’s interesting that at a time when the mantra of “content is king” seems to pervade the industry’s thinking on user engagement, Apture has approached the challenge from the reverse perspective.  In the Apture worldview, the problem to be tackled is not the content on your own site, but rather the related content on other sites that threatens to prey on a user’s distractibility and confusion.

I was also struck by Grovo, though having seen them demo twice, I don’t quite get their product.  Grovo provides short, self-created videos to teach users the ins and outs of popular web services, such as Twitter, Facebook and Craigslist.  Ever seen those TV infomercials for instructional DVDs on basic computer programs?  Grovo is essentially seeking to replicate these products for the web.  Perhaps I’m misunderstanding their target audience, but I keep wondering whether individuals who are using web services will have the attention span or time to sit and passively watch informational videos.  Older users and enterprises may be interested in training videos, but I question whether these users are self-selecting: those who are using (or want to use) web services may already understand these products well enough not to need an instructional video.  There may also be some cognitive dissonance on my part here: it just seems odd to me that users would rely on a traditional, passive tool (an instructional video) to learn how to use constantly-evolving and innovating web services.

20 9 / 2010

Last month I discussed the questionable headline and analysis in Wired’s September cover article.  Today David Pogue of the New York Times echoes my conclusions about this article, and in particular the rather obvious mistakes the author makes in interpreting the gigantic graph that sits atop the article and forms the basis for his conclusions.  I used to really enjoy reading Wired magazine.  Now, the magazine seems compelled to frame its articles in sensationalist terms, even when inaccurate.

04 9 / 2010

I’ve been doing some thinking recently about the state of business development deals in the New York technology startup world.  Personalization is at a premium, but how best to personalize?  There seem to be two broad camps right now: companies that need user data to improve relevance and succeed, and companies whose successful products do not themselves drive profits but generate monetizable user data in the aggregate.  There’s a lot of potential for deal making between these groups, particularly in the three cases below.

1) Hunch, Perpetually and Relevance

Hunch has burst onto the NY tech scene seeking to build a “taste graph” of the internet by having users answer penetrating questions about a wide variety of disparate topics, from food preference to personality traits to knowledge about currency exchange rates.  The goal is to match each user to any entity on the Internet, whether that means a website, a particular product or a personality to follow on Twitter, and to provide recommendations to the user based upon this matching process.

Hunch’s questions are probing and illuminating: they do not simply ask for a user’s basic, literal preference for A over B.  Rather, they attempt to get at some deeper element of the user’s personality and then extrapolate based upon the traits they uncover.  I’ve also been impressed with Hunch’s ability to make answering these questions fun, which is due, in large part, to the clever nature of the questions themselves and the sense that the user learns things about themselves in the process.  But what about the actual recommendations?  Their relevance to the user would seem to be the whole point of using Hunch, and key to the site’s success.  I found my recommendations to be relevant in most cases, but also unsurprising.  For example, I already know I’m a Mac guy and a viewer of the Daily Show.  Relevant recommendations?  Yes.  Helpful recommendations?  Debatable.

I began to think about ways in which Hunch could dig even deeper into a user’s tastes, to suggest something they might not have considered before.  In the process, I kept coming back to Pandora, the gold standard (in my mind) of content recommendation.  Pandora consistently introduces me to new music I’ve never heard before that I really like.  By parsing a given song into roughly 400 attributes, Pandora is able to achieve incredible accuracy in extrapolating from a user’s past musical preferences to suggest new songs and artists.  The obvious impediment to simply recreating this process for Hunch is that web content, due to its almost infinite diversity, is not susceptible to the same near-mathematical analysis and categorization that music is.  The Music Genome Project (the brains behind Pandora) could not simply be tweaked to work for non-music content.  Nevertheless, there should be ways for Hunch to leverage data beyond simple question and answer.
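
Pandora’s actual model is proprietary, but the underlying idea, comparing items as vectors of hand-labeled attributes, can be sketched in a few lines.  The attribute vectors below are invented:

```python
import math

def cosine(u, v):
    """Cosine similarity between two attribute vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Invented attribute vectors (Pandora scores ~400 hand-labeled traits per song).
catalog = {
    "song_a": [0.9, 0.1, 0.7],   # e.g. acousticness, tempo, vocal emphasis
    "song_b": [0.8, 0.2, 0.6],
    "song_c": [0.1, 0.9, 0.2],
}
liked = catalog["song_a"]

# Recommend the unheard song most similar to what the user liked.
best = max((s for s in catalog if s != "song_a"),
           key=lambda s: cosine(liked, catalog[s]))
print(best)  # song_b
```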

Enter Perpetually.  Perpetually (which I had previously written about here) is pioneering the use of visual web analytics by archiving every iteration of a given site and then comparing analytics data to determine how visual changes (and content changes, to the extent they are implied by visual changes) impact a user’s behavior on that site.  For instance, Perpetually measures how the placement of a link on a given site affects how often that link is clicked.  Currently, such data is derived by comparing behavior between different users.  But there is no reason why an individual user’s behavior could not be compared, either with previous visits to the same site or with visits across different sites.  A user’s behavior on a given site provides insight into, among other things, their aesthetic preferences, but also their content preferences (e.g., do they prefer sites that provide links to other sites, or lots of photos?).  A partnership between Hunch and Perpetually would provide a wealth of more subtle user data upon which Hunch could generate more impactful recommendations.  By integrating Perpetually’s service into Hunch, a Hunch user could opt in to importing data previously collected by Perpetually, or agree to have certain elements of their future web surfing measured by Perpetually’s technology.  Hunch, in turn, could use this data to offer better matching and non-obvious recommendations on a level closer to that of Pandora.  Certainly privacy is a concern here, but given Hunch’s current success in getting users to answer highly personal questions about themselves, I think they would be able to present and implement Perpetually’s features in an appropriate and tactful fashion.
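
The kind of measurement Perpetually makes is simple to illustrate: compare click-through on the same link across two archived iterations of a page.  The figures below are invented:

```python
# Invented click data for two archived iterations of the same page.
versions = {
    "2010-08 snapshot (link in sidebar)":   {"impressions": 1000, "clicks": 18},
    "2010-09 snapshot (link above fold)":   {"impressions": 1000, "clicks": 47},
}

for label, d in versions.items():
    ctr = d["clicks"] / d["impressions"]
    print(f"{label}: CTR = {ctr:.1%}")
# The jump in click-through after the layout change is the signal
# Perpetually surfaces; per-user versions of the same comparison would
# yield the individual preference data discussed above.
```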

2) SharesPost and its approach to Transaction Advisors

SharesPost seeks to create liquidity in the market for private company stock by matching buyers and sellers and providing a platform upon which to consummate purchase and sale transactions in an organized, efficient fashion.  By making purchase and sale offers public, SharesPost hopes to create more efficient pricing for private company stock and more opportunities for buyers and sellers to undertake transactions.  Many, if not most, parties to such transactions will require legal and financial valuation services, and these services are a natural partner for SharesPost.  Beyond generating referrals, making legal and financial advisors accessible within the SharesPost platform is key to maximizing the successful consummation of transactions: ready access to professionals will eliminate hesitation on the part of otherwise interested participants who might be intimidated by acting without expert assistance.  How should SharesPost approach these partnerships?

Competitor SecondMarket has created an “ecosystem” in which specific legal and financial advisors are recommended to provide transaction services.  There are a few problems with this approach.  First, there is less price competition and efficiency: the recommended providers are small in number and face less competition from one another in the fees they charge.  Granted, users are free to utilize a provider outside of the ecosystem, but the fact remains that SecondMarket has created a system of “preferred providers”, and these providers have a distinct advantage over outsiders.  Second, with only a few practitioners listed, conflicts of interest may arise between buyers and sellers, as well as between SecondMarket and the advisors themselves.  For instance, a financial advisor might feel pressure to advise proceeding with a given transaction in order to generate transactional revenue for SecondMarket and remain one of its listed, preferred advisors.

Instead, SharesPost should be seeking partnerships with companies such as Legal River and Covestor, which offer a much larger universe of professionals.  This would ensure independence between the advisors and SharesPost, create greater price competition and allow parties to take advantage of the feedback systems available on these sites.

3) Gilt, Swipely and Better Data

Gilt Groupe has been a trailblazer in the online sample sale space.  One area for improvement, however, is Gilt’s lack of personalization.  Other deal sites, like Yipit and LivingSocial, tailor deals to the user’s taste, and this appears to be a major front of competition.  Of course, in order to personalize its sales, Gilt needs user data.  And while Gilt already has data on what its current members have purchased, there’s a limit to the effectiveness of this data: any user who has made enough purchases on Gilt to provide sufficient data on his or her preferences is already an avid Gilt purchaser, and this audience thus does not offer high growth prospects.  Leveraging current user data is a good way to foster loyalty and continued purchasing.  To drive growth, however, Gilt should leverage the data being collected by a service such as Swipely or Blippy, which tap directly into their users’ credit card data, in order to tailor sales (and pursue relationships with certain brands) more effectively.

Jetsetter, a spin-off, Gilt-like service for vacation products, could similarly tailor its deals to users by tapping into the right data source.  In Jetsetter’s case, a deal with TripIt, which aggregates users’ travel data in the course of offering a tool for organizing the various elements of a vacation, would make a lot of sense.

27 8 / 2010

Fred Wilson has an interesting discussion on AVC today about angel (very early stage tech startup investor) exits and Etsy’s recent financing round.  M&A continues to be an exit path for VC investment, but with the lack of success in the IPO market, VC investment has been outstripping returns of capital over the past decade or so.  Early stage angel investors, and even employees, have also been in need of liquidity opportunities of late.  The recent Etsy financing is an example of an emerging third route to exit (particularly for angels): the secondary sale to a private equity fund, hedge fund, or even VC fund.  In Etsy’s case, the founders themselves did not participate in this financing round.  Rather, it was the early stage angel investors who cashed out in return for VC investments.  As Fred points out, such secondary transactions can provide liquidity for angels on a much shorter time frame than a sale of the company or an IPO.

In addition to this “emerging third way” to liquidity, the growth of platforms such as SecondMarket and SharesPost should also provide greater liquidity (not just for employees, but at all stages and levels of the capital structure) as they mature and increase in traffic.  These are certainly welcome developments for both investors and those seeking to raise funds right now, as clearer liquidity paths will make future investments more attractive and also allow investors to recycle their investment dollars more rapidly and put them back into the startup market.

23 8 / 2010

Nintendo and its DS line of portable gaming systems have been feeling Apple’s heat lately, given Apple’s heavy emphasis on gaming in the App Store.  Recent sales figures show that the iPhone has likely been eating into Nintendo DS sales.  Could Apple be planning to expand this competition to the living room?

With rumors swirling around a revamped Apple TV (allegedly called the iTV) that will integrate with the iPad and iPhone, run on iOS and allow access to the App Store, I’ve been wondering whether the inclusion of an advanced gyroscope in the iPhone 4 has far more significance than we currently realize.  Pair the gyroscope-enabled iPhone 4 with the iTV and games designed for the television that utilize motion control, and suddenly Apple has a product competing with the Wii (as well as the new motion control systems being pushed out by Sony and Microsoft).  If the lower price points in the App Store are maintained, this could cause more headaches for the video game console industry.