Archive for the ‘Internet’ Category

Latest Internet News

Posted: December 8, 2010 in Internet

SEA flocking to social networks

Internet users in Southeast Asia are flocking to social networking sites, pushing year-on-year penetration rates above the global average, according to new statistics from comScore.

In a Web conference Wednesday, the market researcher discussed the state of Internet usage in the region, drawing on a survey it conducted in January 2011 covering six markets: Singapore, Hong Kong, Malaysia, Indonesia, Vietnam and the Philippines.

According to Joe Nguyen, comScore’s Southeast Asia vice president, Vietnam saw the highest year-on-year growth at 35 percent, with social network penetration in the country climbing from 49 percent in 2009 to 66 percent last year. Its penetration rate, however, was still lower than those of the other markets in the region, which all clocked adoption rates above the global average of 70 percent, Nguyen noted.

Social network penetration in the Philippines stood at 95 percent, while Malaysia was at 91 percent, Indonesia at 90 percent, Singapore at 82 percent, and Hong Kong clocked in at 76 percent.
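The year-on-year growth figures quoted in this piece are simple relative changes between two years’ penetration rates. As a quick sanity check on the numbers above, here is a minimal Python sketch (the Vietnam figures are the article’s own; the helper function is ours):

    def yoy_growth(previous: float, current: float) -> float:
        """Year-on-year growth, as a percentage of the earlier value."""
        return (current - previous) / previous * 100

    # Vietnam's social network penetration: 49 percent in 2009, 66 percent in 2010.
    print(f"Vietnam: {yoy_growth(49, 66):.0f}% growth")  # prints ~35% growth

The same formula reproduces the other year-on-year figures cited throughout the report.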

The analyst also noted that the high adoption rates in the Philippines, Malaysia and Indonesia were “almost exclusively” driven by Facebook.

Facebook has also seen strong success in other parts of Asia; an earlier comScore report in August pointed to Facebook as the most visited social networking site in India.

Nguyen said the Philippines has emerged as the world’s leading market for the social networking behemoth. According to comScore estimates, Facebook currently enjoys a 93.7 percent penetration rate in the country, displacing search engine Google as the number one Web property in the Philippines.

Nguyen added that Filipinos spent an average of 7 hours per month on Facebook. Malaysia, at 88.4 percent, and Indonesia, at 87.4 percent, also ranked among the world’s top 10 Facebook markets.

In addition, Indonesia, the Philippines and Singapore were among the top 15 markets worldwide for Twitter. Some 21 percent of Indonesian Web users visited the microblogging site in January 2011, the fourth-highest Twitter reach of any market worldwide. The Philippines, at 13.8 percent, and Singapore, at 13.6 percent, took eighth and ninth place, respectively.

comScore observed that worldwide usage of social networking sites grew from 67.6 percent to 70.5 percent between 2009 and 2010. Social networks also saw the biggest gains in the amount of time online users spent on them, with their share of time spent online increasing from 11.9 percent in 2009 to 16 percent in 2010.

On the flipside, e-mail and instant messaging (IM) fell year-on-year in terms of worldwide reach, noted Nguyen. E-mail use slipped from 65.8 percent in 2009 to 61.3 percent in 2010, while IM saw a more “dramatic” slide from 41.1 percent in 2009 to 34.7 percent last year, he said. Both e-mail and IM also fell in terms of time spent online.

Photo-sharing on the rise
The increased use of social networks, however, was overshadowed by the photos category, which he said saw the fastest growth, from 42.8 percent in 2009 to 52.7 percent in 2010.

Southeast Asia saw similar growth rates in this category, where Vietnam again led the pack with a 73 percent year-on-year jump. Malaysia came in second with a 47 percent growth rate, followed by the Philippines at 46 percent, Hong Kong at 23 percent, Singapore at 17 percent, and Indonesia at 16 percent.

Nguyen observed that photo-sharing has become a “key component of the social networking experience”, noting that the high growth was evident in every Southeast Asian market, driven largely by Facebook users uploading and tagging photos of themselves and their friends. He also pointed to the popularity of dedicated photo-sharing sites such as Flickr.

The ubiquity of digital cameras and camera phones, along with the large youth populations in these markets, also contributed to growth in this category, he added.

The comScore survey also assessed the region’s use of other Web categories, including online retail, online banking, multimedia sites, travel sites, online video, search, news and blogs.

For instance, visits to travel sites, online retail stores, multimedia sites and online banking sites increased across the six Southeast Asian countries. Only Hong Kong and Singapore showed relatively flat growth for online retail visits, Nguyen said.

The analyst added that visits to travel sites saw significant growth due to the region’s rapidly increasing user confidence in using the Internet to research and book flights. The growing number of low-cost or budget carriers servicing the region also contributed to the increase, since more people can now afford to travel and are willing to spend money online to book flights on low-cost airlines, he said.

Sony mulls plan similar to Facebook-Warner deal

A Sony Pictures executive said Wednesday that the film studio is “looking into” new Web distribution methods similar to the one announced previously by rival Warner Bros. Studios.

Warner Bros.’ home-entertainment division said Monday that it will start renting movies via Facebook, starting with the 2008 hit “The Dark Knight”. The offer is among the first attempts by a major media company to test the social network’s potential as a distribution hub.

Speaking on a panel during the Media Summit conference in New York, John Calkins, executive vice president of Sony Pictures’ digital division, said distributing films through Facebook is a “great first step” in testing the power of social networks to sell films.

“Our view is that Facebook is certainly a viable pool for people interested in media content,” Calkins told the audience. “If you can have fans do the marketing, that’s a great idea…we’re looking at things like that.”

The reaction to the Warner Bros. announcement was similar to that in the forehead-slapping “could’ve had a V8” commercials. To some observers, the idea of trying to sell films to Facebook’s 600 million worldwide monthly users is so obvious and logical that one of the big questions raised is why it hadn’t been tried before.

The deal even inspired some pundits and Wall Street investors to question whether Facebook could threaten Web-video services from Netflix, Amazon, and Apple.

Calkins seemed hesitant to put Facebook in Netflix’s league just yet, as did some of his fellow panelists. Some on the panel, which included executives from Yahoo and MTV, said they doubted the studios would risk distributing US$100 million films on a social network for a long time to come. One person called Facebook a “backstop” to more traditional distribution means.

John Penney, executive vice president for strategy at Starz, the pay TV channel that was one of the first major movie distributors to license content to Netflix’s streaming service, said he sees the studios renting and selling movies through Facebook and other Internet services after the initial period when DVD sales are hottest.

“For the post-DVD window, that’s where these new platforms [such as social networking] are becoming interesting,” Penney said. “Batman is a library product. These will be mechanisms to offset the drop in DVD sales.”

Sony’s Calkins got the audience laughing by responding: “Why so negative?”

In truth, the six major film studios (Warner Bros., 20th Century Fox, Disney, Sony, Paramount, and Universal) are sensitive about all the media attention around waning interest in discs.

Last week, some studio executives met with CNET and conceded that DVD sales are in decline. But that only tells part of the story, they said. There are lots of challenges facing them as they try to navigate the transition from DVDs to Internet distribution. This isn’t an easy time for them. It’s a walk on a razor’s edge trying to give consumers the wide access to content and lower prices they now demand while maintaining healthy revenue and profits.

Consider that the music industry has been fighting the same Internet battle for much longer than the film sector, for over a decade now, and the results are mixed at best. Nonetheless, the studio managers I spoke with said they believe they can find profitable Internet-distribution models but it will take time.

At Wednesday’s conference, Calkins joked that he attended on the condition that he be allowed to talk about UltraViolet. That’s the technology standard that all the major studios are backing save for Disney.

By creating standards, such as common file formats, for participating consumer-electronics manufacturers and distribution services to follow, the studios intend to provide consumers with a means to play their films across a wide range of devices and services, just like the DVD does today. The No.1 priority is to entice film fans to start buying and collecting movies again.

Sony is one of the studios leading the UV charge. “You’ll have seamless access to your content,” Calkins said. “You can make the argument that it’s better than the DVD because you can watch your movies from a hotel in Taipei and you didn’t have to travel with your discs.”

Hopes are high for UV, but the studios aren’t marrying themselves to any one Internet distribution service.

They still plan to pipe movies directly into homes via Premium Video on Demand (PVOD) while the films are still in theaters. They will distribute through the Internet services of the cable companies, the Xbox, and the PlayStation. They will sell and rent films through services such as Netflix, Amazon, Apple, Vudu, and yes, Facebook.

The fact that Warner Bros. was the first to stick a toe into Facebook waters wasn’t an accident. The studio is among the most aggressive of the big six to test and adopt new technologies. In 2006, the studio was the first to sign a distribution deal with the creators of the BitTorrent software. Last year, Warner Bros. struck a groundbreaking agreement with Netflix that gave the Web’s top video-rental service access to more streaming content in exchange for a 28-day moratorium on rentals of new releases. The move was designed to protect Warner Bros. DVD sales.

More recently, the studio began making some catalog titles available as iPad and iPhone apps.

Thomas Gewecke, president of Warner’s digital distribution and the man who approved the Facebook project, is a big booster for making content available online. Some of his ideas were likely shaped by his experiences at Sony Music, where he worked before moving to Warner Bros. in 2007.

During a speech he made in 2008, Gewecke said that when he started in music, CD sales were healthy and piracy wasn’t a factor. “We know how that changed,” he added.

Google claims better Web video with new VP8

Google’s VP8 technology for encoding Web video just got a notch better at creating video, the Net giant says, and another round of improvements is set for a sequel due next quarter.

On Wednesday, Google released the “Bali” version of its VP8 software and announced a new Cayuga version set to ship late in the second quarter of 2011. The software doesn’t change the VP8 technology itself, a codec that defines a method of encoding and decoding video, but it works faster and does a better job than the preceding public version of VP8, called Aylesbury, released in November.

When encoding video with VP8’s best quality setting on a computer with an x86 processor, “Bali runs 4.5x faster than our initial release and 1.35x faster than Aylesbury,” said John Luther, WebM product manager, in a blog post Wednesday. A lesser improvement comes with the good quality setting. The new version also works better on ARM chips, particularly multicore ARM chips. That’s important given the growing use of video telephony and the dominance of ARM processors in smartphones and tablets.

VP8 and the Vorbis audio codec together form Google’s royalty-free, open-source WebM technology. It’s not clear yet exactly how patent-free WebM will be, though; a patent licensing group called MPEG LA is actively soliciting patent holders to come forward if they have patented technology they believe is required to implement WebM.
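For readers who want to experiment, a WebM file is simply VP8 video paired with Vorbis audio in one container. A minimal sketch of producing one from Python, assuming an ffmpeg build compiled with libvpx and libvorbis support (the file names and bitrate are placeholders):

    import subprocess

    def encode_webm(src: str, dst: str, video_bitrate: str = "1M") -> None:
        """Transcode a clip to WebM: VP8 video plus Vorbis audio."""
        subprocess.run(
            [
                "ffmpeg",
                "-i", src,              # input file
                "-c:v", "libvpx",       # VP8 video codec
                "-b:v", video_bitrate,  # target video bitrate
                "-c:a", "libvorbis",    # Vorbis audio codec
                dst,
            ],
            check=True,  # raise if ffmpeg exits with an error
        )

    encode_webm("input.mp4", "output.webm")

Encoder speed settings such as the “good” and “best” quality modes mentioned above are additional libvpx options layered on top of this basic invocation.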

In the grand scheme of things, the new Bali and Cayuga versions don’t drastically change the fate of VP8, a technology Google is hoping will usher in a royalty-free online video future not possible with today’s dominant but patent-encumbered H.264 codec. But Bali and Cayuga do show that Google is continuing to invest significantly in a technology it clearly deems a high priority for its vision of the Net’s future. Google Chrome 10, released Wednesday, dropped built-in support for H.264 for showing videos built into Web pages with the new HTML5 standard.

Aylesbury focused on faster decoding, and Bali focused on faster encoding. “We will continue to focus on encoder speed in Cayuga,” Luther said in the blog post. “There are more speed improvements to be had. As always, we’ll continue to improve video quality in the encoder.”

Faster encoding is important for companies–and for Google’s massive YouTube operation–that are considering encoding Web video with WebM as well as other technology.

US state bill seeks to block employer access to social accounts

A Maryland state senator has introduced a bill that would prevent employers from asking their staff or job applicants to disclose their usernames and passwords for social networking sites.

According to a report in the Frederick News Post, Maryland State Senator Ron Young is also seeking to prevent employers from threatening or taking disciplinary action against employees who refuse to reveal their login information.

Young’s move follows previous reports that a state employee, Robert Collins, was asked by his employer, Patuxent Institution, to divulge his Facebook login details as part of a recertification process. Patuxent is one of three correctional agencies whose recruitment is managed by the Maryland Department of Public Safety and Correctional Services. Collins had worked at the prison for three years.

When the issue came to light, the American Civil Liberties Union (ACLU) of Maryland alerted the Maryland department, which later agreed to suspend and review the practice.

A representative of the department, Rick Binetti, had said that login information would be requested if job applicants indicated they used social media, but added that refusal to provide passwords ultimately had no bearing on the applicant’s employment.

Binetti, however, refused to explain why the login information was necessary as part of an existing employee’s background check for recertification.

The ACLU said the practice of requesting an employee’s login information is already illegal under the Federal Stored Communications Act and its Maryland counterpart. The Act makes it illegal for any person to access stored electronic communications without valid authorization, said an ACLU spokesperson in the Frederick report.

Young noted that employers have no right to access their staff’s personal pages or e-mail accounts. He added that if social media accounts are used for work purposes, the bill would still allow employers to request login information.

Maryland state lawmakers will review the bill by Apr. 11, when the general assembly session is scheduled to adjourn.

Akamai: Neverending end to ‘world wide wait’

newsmaker In 1998, Tom Leighton partnered with graduate student Daniel Lewin to start a company whose vision was to end the “world wide wait”.

The MIT professor, Lewin and other research collaborators banked their solution on mathematical algorithms and a spirit of teamwork, hard work and a resolve never to give up.

The company went through some “very difficult times”, particularly when Lewin, who served as co-founder and CTO, was killed onboard American Airlines flight 11 in the Sep. 11 attacks. The dot-com bubble burst proved to be another challenging chapter, forcing the rapidly-growing business to axe headcount.

Akamai has since grown to become a billion-dollar company and one that is gunning to touch US$5 billion in revenue by the close of the decade. Yet the company has remained true to its original vision and is, in a way, “still at the beginning”, said Leighton.

During a recent visit to Singapore, Leighton spoke candidly with ZDNet Asia about Akamai’s growth, and the company’s role in IPv6 evangelism and the Net neutrality debate.

In 1998, you co-founded Akamai with Danny and a few other MIT associates. How did the name Akamai come about?
Leighton: Danny had a friend who worked in PR and so we asked him, “What should we name the company?” He said Hawaiian names are going to be in. So we got a Hawaiian-English dictionary, picked about 20 words and put them on the board. “Akamai” means clever and cool and intelligent–we liked that, it sounded good, so we picked Akamai.

Akamai was founded at a time when the Internet was truly becoming alive. How different is the company today compared to when you first had the vision to start up the business?
Back then, we had no revenue, only a few employees and a big dream. Today we’re a billion-dollar company–we service all the top global e-commerce brands, top media brands and nine of the top 10 banks globally. We carry up to a third of the world’s Web traffic. So the dream now has a lot of reality there.

We’re doing today exactly what we wanted to be doing–the vision has never changed. Our unique approach of putting our servers everywhere, in as many places as we can, and using distributed algorithms to intelligently deliver content and accelerate applications, building a virtual Internet for the benefit of our customers–that vision is the same. But we’re still at the beginning in some sense. We’ve got a billion dollars now, and we’re profitable and growing rapidly, but there’s a lot more that we’re going to be doing against that vision over the next decade.

As chief scientist, you are expected to predict the future of the Internet and work toward solving tomorrow’s problems. What do you see are the challenges five to 10 years down the road, in terms of how Internet content is going to be delivered and consumed?
Security is a huge area of concern–the DoS (denial-of-service) attacks are growing rapidly in number, volume and sophistication. In the last couple of months, we’ve seen probably three dozen of our customers sustain very large-scale attacks. That’s more than we’ve had in the last year.

There’s theft of information on the Internet, corruption of information on the Internet. The bad guys have a lot of capability. It’s always harder to defend than it is to destroy. So security is a major area of investment for us and we’ve got a lot of products that have come out recently, or are coming out this year to deal with that.

Another major area is mobile access to the Internet, particularly in Asia. Of course security is also a big deal here–a lot of the attacks are against sites in Asia and come from bots in Asia. Mobile is exploding right now–we just had the first quarter globally where more Internet-enabled mobile devices were sold than PCs. There are many countries in this region where the dominant access is mobile and mobile technology will leapfrog landline technology.

That creates challenges because it’s not just one device that people are buying, it’s a multiplicity of devices. Right now we’re in a period where every week or couple of weeks there’s a new device announced by a major manufacturer. And these devices use different operating systems so if you’re watching video, for example, it’s streamed using different protocols. One device won’t take another operating system or streaming protocol. This creates a challenge for the content owner–in the old model they would just send their show to the satellite and broadcast it to everybody. It didn’t matter what kind of TV you had, you’d get the signal. That’s not true on the Internet.

Today, the person with the content has to encode his movie at different bit rates, in different streaming formats. He’s got to worry about different devices people are watching it on and if it will work on those devices. We’re investing heavily to make those problems go away for the content provider–just give us the movie or the show like you did in the old days, and we’ll make it work on all the different environments out there.

Also, enterprise applications and cloud computing are just in their infancy. As your desktop and corporate applications move into a shared facility of some kind, they sit farther away; performance problems will be incurred and security issues get raised. So we’re making large investments to, again, make those problems go away for large enterprise customers.

Our overall goal in all this is to make the Internet reliable, secure, scalable, high-performing and easy to use for business. We build our own virtual Internet that lives on top of the actual one, and we use all our own protocols for routing and communication, as well as application-layer protocols, to make things more efficient. So it’s designed to confront these problems that are already starting to exist and are going to get a lot more challenging over the next decade.

And the security technologies are developed in-house?
We do everything in-house; we do things with partners as well. Take the way DoS attacks work. The typical attacks we’re seeing now range from tens of gigabits a second to hundreds of gigabits a second. If you take the traditional IT approach, you put your Web site and applications in one data center, maybe two or three that are spread geographically. You get a tier-one provider and you buy all sorts of security software. There’s no way that works in the face of these attacks. The typical attack is usually a thousand to 10,000 times normal traffic volume. No matter how good your filters are or how capable your provider is, when all that traffic lands at your data center it swamps your pipe and your filter doesn’t even get to do its job.

The alternative, of course, is that you could spend a fortune–a thousand times the normal cost–but that’s hopeless, you can’t spend that much money. With the Akamai approach, we intercept the communications to the end-users where the end-users are, because our servers are near the end-users. There’s a ton of capacity there. Say, the bot army is from South Korea, well, we’ll intercept all that traffic in South Korea so it won’t ever get to Singapore.

That is the only way that I know of that you can defend yourself against these large-scale attacks. It’s like fighting a war–if you fight the war at home, there’s going to be casualties, it’s not good. If you fight the war in the other guy’s turf, it’s a much better thing to do. With the Internet, all the bandwidth is in the last-mile. It’s just orders and orders of magnitude more capacity, and that’s where we place our servers so that we get scale, we can deliver at high volume and we can intercept attack-traffic right where it starts.

We have a unique capability–we keep the most targeted Web sites up against the most vicious attacks. Part of that is because we’ve got enormous scale, the other part is we have plenty of smart people working very hard for a long time trying to stay ahead.

One topic that’s on many people’s minds at the moment is IPv6. What was Akamai’s own experience in the revamping of its infrastructure to be IPv6-ready?
There’s a variety of roles we play. We strongly support the transition to IPv6. We’re participating in World IPv6 Day, and we evangelize about it.

What’s interesting, I think, is that IPv6 is front and center in people’s minds in Asia. It is not so in the United States, and probably not in Europe either. IPv6 is very important to the future of the Internet.

We worked with the U.S. government and we drafted the best practices for industry to follow. We’re making ourselves IPv6-compliant as a provider. We’re also providing a service to our customers so that they don’t have to do anything to their Web site and we’ll make it look like it’s v6-compliant.

The transition is going to take a decade, at least. During that time, you’re going to have to deal with both v4 and v6. Today, and for the near future, the v6 Internet is going to perform fairly poorly just because of how it’s hooked up to the v4 Internet, which is still the dominant mechanism. We’re in a position to actually make those performance problems go away so that for our customers and their end-users, whether they are on v4 or v6, we can make it work for them.
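To make the dual-stack point concrete: during the transition a single hostname can resolve to both IPv4 (A) and IPv6 (AAAA) records, and clients pick between them. A minimal Python sketch of inspecting both address families, with a placeholder hostname:

    import socket

    def resolve_dual_stack(host: str, port: int = 80) -> None:
        """Print the IPv4 and IPv6 addresses a hostname resolves to."""
        for family, _, _, _, sockaddr in socket.getaddrinfo(
            host, port, proto=socket.IPPROTO_TCP
        ):
            label = "IPv6" if family == socket.AF_INET6 else "IPv4"
            print(f"{label}: {sockaddr[0]}")

    resolve_dual_stack("www.example.com")

A host with no AAAA records prints only IPv4 entries; bridging that gap for end-users is the performance problem Leighton describes above.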

What is most challenging about making these performance problems go away for your customers?
Right now, in many territories, it is convincing the customers they need to be paying attention. In the United States, they are just not interested. It’s not a fire that’s burning down the house so they think they can worry about it next year. As we think about developing the services that customers are going to buy, right now in the U.S. there’s no demand. In Europe, there’s minimal demand–there’s a couple of customers. Asia, I think, will be the first place where we will have real adoption of that capability because here they care a lot about these things and that’s smart.

Why is there the disparity in attitudes?
It’s possible the crunch comes here first. Asia is growing most rapidly on the Internet…the region’s probably got less IPv4 space to start with. I wouldn’t be surprised if the proliferation of devices and the growth here is much higher, or will be much, much higher, than in the rest of the world.

In this region we’ve been hearing quite a bit about the Net neutrality debate in the United States. Was Akamai ever in danger of being shut down by such regulations?
No.

What are the risks for Akamai if the FCC came out to say that all forms of paid prioritization would be prohibited?
I think what the FCC (Federal Communications Commission) is worried about is networks being unfair in their access and biased toward their own internal properties. That doesn’t mean you can’t have various levels of service as long as they are available to anyone who wants to pay for them. There was never any risk to Akamai. We worked closely with the FCC to help them understand how things work with the Internet.

What Akamai does is, first, it makes the Internet better for everybody. Everybody benefits when we offload traffic from the backbone, whether you are a customer of ours or not. It’s a good side-effect of Akamai being in the ecosystem.

What Akamai does is get better service for our customers who are the Web sites and the Web applications–we’re happy to sell that to anybody, there’s equality that way. When we work with the networks that are our partners, we help offload their traffic and improve performance to their end-users. It’s a very symbiotic relationship. In the Net neutrality debate, we don’t even take sides. We’re neutral in the discussion because we sort of help both sides. Whatever the government decides to do, we’re going to make it work better.

The Net neutrality discussion is slowly moving over to this part of the world as well. What should policymakers consider when working out such frameworks?
You need to consider the interests of both sides. You want to have the citizens be able to get access to content, you don’t want to have the content blocked off unfairly. At the same time, the networks have to have some way of generating revenue to pay for the deployment. So they’ve got to be able to charge for the access, they have a legitimate interest, too. You need to strike a balance.

In the U.S., some of the networks and cable companies were given monopolies by the government. And when you take that, you have a certain obligation now about how you do your pricing. That’s sort of the hook that’s being used here: Hey, you’ve got this monopoly, we let you go into the neighborhood, we didn’t let anybody else in–you sort of have to give access to let the other content guys in, you can’t use it in an unfair way. But, that said, you’ve got to let the networks monetize it or they’re not going to grow. At the end of the day, you want your population to have access to all the content, and it has to work as an ecosystem so that everybody can make money at the same time.

If you could go back in time and change something in Akamai’s history, what would it be and why?
Not having Danny on American Airlines flight 11, that’s probably first.

Well, if we could see the future, we could have timed things better. We were trying to grow very fast during the dot-com bubble days and to get as much growth as possible. But if you knew when the bubble was going to burst, you wouldn’t have hired the last few hundred employees you had to fire when the bubble burst.

I think we are on a track pretty close to what we wanted from the beginning. Our vision’s the same. We thought video was going to hit over IP faster than it has–today, video watching over IP is still about 1 or 2 percent–we thought the base would be larger. Basically, things are unfolding in the direction we designed the company for–we’re fortunate that it’s worked out that way.

Indonesia online banking adoption fastest in SEA

The Southeast Asian online banking audience “surged” over the past year, with the number of users growing by double-digit percentages; Indonesia saw the highest growth among the six markets surveyed, according to a new report.

In a statement released Friday, comScore reported that visits to online banking sites grew strongly in the past year in Malaysia, Hong Kong, Vietnam, Singapore, Indonesia and the Philippines. Comparing unique visits to online banks in January 2010 with those in January 2011, the research firm found that the number of visits grew by a double-digit percentage in all six markets.

The survey polled Internet users aged 15 and above who accessed online banking sites from work or home. The numbers did not include access via mobile phones or handhelds, or from public computers such as those in Internet cafes.

Indonesia clocked the highest growth, with online banks in the country registering a 72 percent increase in the number of unique visitors from 435,000 in January 2010 to 749,000 this January.

The Philippines ranked second at a 39 percent growth, comScore revealed. Online banking users in the country grew from 377,000 last year to 525,000 in January 2011. Vietnam came in third with a 35 percent growth as users increased from 701,000 to 949,000 over the same period.

Malaysia maintained the largest number of online banking users among the six Asian markets. It had 2.4 million users in January 2010, which grew 16 percent to 2.7 million in January 2011.

The comScore report also pointed to Hong Kong as the most highly penetrated online banking market in the region, with online bank users representing 35.5 percent of its total online population. The number of online banking users in the market grew from 1.3 million to about 1.5 million over the past year.

In Singapore, where two-factor authentication for Internet banking systems has been in place since 2006, the number of users grew 14 percent from 779,000 in January 2010 to 889,000 this year.

In the report, Joe Nguyen, comScore’s Southeast Asia vice president, noted that while the online banking penetration rate in this region has yet to reach the level of markets in North America or Europe, such services are growing rapidly as more users turn to the convenience of online channels to conduct banking activities and transactions.

This trend signifies that banks should continue to enhance and develop their site features, as well as improve the overall customer experience to continue to appeal to current and prospective customers, said Nguyen.

Google Profiles lets you say more about yourself

The ongoing saga of Google’s attempt to “get” social media has been full of so many disappointments–and more recently, so much silence–that sometimes it seems like the company has just given up on it entirely.

Whether Google’s still working on a massive operation slated to take a bite out of Facebook’s market share remains the stuff of rumor, but it has been making a subtle move here and there. On Wednesday, the blog Google Operating System noticed that Google’s member profiles have undergone some notable interface changes, like click-to-edit functionality. There are new fields to fill out, too: employment and education information, “bragging rights”, and a field at the top of the page that prompts for “ten words that describe you best” (not unlike the “write something about yourself” field that Facebook profiles used to have and no longer do).

Considering there have been rumors that Google Profiles would be the backbone of “Google Me”, the working title for the alleged Google social-networking initiative, this is an interesting development, albeit a small one.

Perhaps of more interest to the average Google user is the fact that in the newly overhauled Google Profiles, you can get rid of the Google Buzz tab altogether. Buzz, a Twitter-like service launched to serious fanfare last year, has been a complete flop, and letting users remove it is a sign that it may be on its way out for good.

Zynga, Disney embrace Web game technology

When it comes to the competition between Flash and Web technologies, the latter camp has two big new allies in the online gaming industry: Zynga and Disney.

Zynga today mostly uses Adobe Systems’ Flash technology as a foundation for its widely played CityVille and FarmVille online games. But an acquisition of a German company last fall is paving the way for a new foundation using technology that uses a browser, not a browser plug-in.

Zynga joined the World Wide Web Consortium (W3C) this week and will share the fruits of its Web-based gaming experience, said Paul Bakaus, chief technology officer of Zynga Germany, in a blog post Wednesday. Bakaus is the creator of the jQuery UI library of user-interface elements for sophisticated Web pages, and Zynga acquired his company, Dextrose, last year.

And Disney Interactive Media Group, part of Walt Disney, acquired Finnish start-up Rocket Pack, TechCrunch reported. Rocket Pack has been developing another foundation for Web-based games called Rocket Engine.

There’s more, too. Motorola Mobility Ventures announced Thursday that it invested in Moblyng, which develops Web-technology games for mobile devices and social networks.

Those developments aren’t enough to unseat Flash. But they exemplify the increasing attention paid not just to using Web technology for games but also to developing the underlying standards.

Competitively, Flash is a powerful incumbent, and gaming is one of its strong suits. Many experienced programmers use Flash already, often employing the serious coding tools Adobe sells. And Flash is a moving target: just this week Adobe released a test version of its “Molehill” technology for hardware-accelerated 3D Flash graphics. Even as Adobe begins embracing Web technologies, for example by contributing to jQuery, it’s also investing heavily in Flash.

Web standards have their advantages, too. Some reach iOS devices where Flash is banned and Android devices where Flash apps can struggle. And a large group of companies is working on bettering those Web standards.

At Dextrose, Bakaus was working on a game foundation called the Aves Engine based on Web technology, not Flash. Now Zynga wants to share its work involving those Web technologies, including the JavaScript programming language and Scalable Vector Graphics (SVG), with others, he said in the post.

“Zynga has recently started investing heavily into the open Web stack. While most of our current games (CityVille, FarmVille) still run on Flash, our subsidiary in Germany is exclusively focussing on JavaScript driven game technology. We are building a new-generation engine to power future games that run platform independent and cross-device…

“As we’re doing something that (likely) hasn’t been done before, a lot of our time is spent on research. Every day, we encounter new issues with the web stack, and we eventually realized that it doesn’t make sense to keep all of it to ourselves. By joining W3C and actively contributing back and sharing our unique perspective, we hope to kill two birds with one stone: Improving our games, and improving the web for anyone building games.”

Facebook, where millions of people play Zynga games, is paying close attention. It’s been working on a Web gaming benchmark and last week released JSGameBench 0.3, a third incarnation of the work in progress. The test measures how fast a browser can show animated “sprites,” graphical elements such as alien spaceships that move around the screen.
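JSGameBench itself is JavaScript running in a browser, but the quantity it measures, sprites drawn per second, is easy to illustrate in any language. Here is a rough Python analogue using the pygame library (our substitution for demonstration purposes, not Facebook’s code): it bounces a few hundred squares around a window for five seconds and reports the frame rate.

    import random
    import time

    import pygame

    NUM_SPRITES, W, H = 500, 640, 480

    pygame.init()
    screen = pygame.display.set_mode((W, H))
    sprite = pygame.Surface((16, 16))
    sprite.fill((0, 255, 0))  # a green square stands in for a spaceship

    # Each sprite: [x, y, x-velocity, y-velocity].
    sprites = [
        [random.uniform(0, W - 16), random.uniform(0, H - 16),
         random.choice([-2, 2]), random.choice([-2, 2])]
        for _ in range(NUM_SPRITES)
    ]

    frames, start = 0, time.time()
    while time.time() - start < 5:  # benchmark for five seconds
        pygame.event.pump()
        screen.fill((0, 0, 0))
        for s in sprites:
            s[0] += s[2]
            s[1] += s[3]
            if not 0 <= s[0] <= W - 16:  # bounce off the side walls
                s[2] = -s[2]
            if not 0 <= s[1] <= H - 16:  # bounce off top and bottom
                s[3] = -s[3]
            screen.blit(sprite, (int(s[0]), int(s[1])))
        pygame.display.flip()
        frames += 1

    print(f"{NUM_SPRITES} sprites at {frames / 5:.1f} frames per second")
    pygame.quit()

Swap the drawing backend and you have, in essence, what JSGameBench does with canvas and WebGL.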

Web technologies use a wide variety of standards for browser games. One coming with HTML5 is called canvas for two-dimensional graphics. A canvas drawing area also can accommodate accelerated 3D graphics using another standard, WebGL. The Facebook benchmark engineers found dramatically faster sprite drawing performance using WebGL.

SVG is another important Web technology, and Bakaus now is a member of the W3C’s SVG working group.

SVG is very useful for some types of graphics such as logos and icons, and it’s got an important advantage over bitmapped graphics formats such as JPEG and PNG in that it can gracefully be zoomed to larger or smaller scales. For an illustration, visit an SVG demo site and use Ctrl+ and Ctrl- to zoom the browser in and out.

That SVG zooming is important for the varying screen sizes and pixel densities of smartphones, tablets, PCs, and TVs. Also nice: SVG rendering can be accelerated with graphics chips and, crucially, SVG is built into IE9.

But Bakaus is interested in SVG for another reason: seeing what can be applied to yet another Web technology standardized at the W3C, Cascading Style Sheets. CSS is getting more sophisticated as a way to draw drop shadows or to animate transitions such as moving photos around a screen.

“While we do not use SVG currently, mainly due to implementation performance reasons, I’m looking forward to seeing what knowledge is hidden within the SVG spec that could be ported over,” Bakaus said.

The new Web standards are at times rough around the edges, unstable, and inconsistently supported in browsers. But they’re real now. Mozilla, on the brink of releasing its first release candidate for Firefox 4, is promoting the new standards on its Web O’ Wonder site, joining other envelope-pushing demos from Apple, Google, and Microsoft.

Programmers have plenty of choices, and it’s unlikely any single technology will win out. The Web technologies, though, clearly are a strong force that’s growing stronger.

Malaysia Airlines takes booking, check-in to Facebook

Malaysia Airlines has taken social media to an all-new high by allowing people to book flights and check in with the airline through Facebook.

MHbuddy, on Malaysia Airlines’ Facebook page, lets users book, check in, manage and share their flights via the social network. The app also lets users know if any of their friends are travelling on the same flight or going to the same destination.

“With MHbuddy we are pleased to provide our fans on social media an easy way to book a ticket without having to leave Facebook,” said Amin Khan, executive vice president of commercial strategy, Malaysia Airlines.

To gain access to the app, Facebook users have to give Malaysia Airlines permission to access basic and profile information about them and their friends, as well as allow it to post to their wall.
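Mechanically, that consent step was Facebook’s standard OAuth dialog of the era: the application redirects the user to Facebook with the permissions it wants listed in a scope parameter. A hedged Python sketch of constructing such a URL; the app ID and redirect URI are placeholders, and the scope names are 2011-era permissions as commonly documented then, so treat them as illustrative rather than MHbuddy’s actual request:

    from urllib.parse import urlencode

    def oauth_dialog_url(app_id: str, redirect_uri: str, scopes: list) -> str:
        """Build a Facebook OAuth dialog URL requesting the given permissions."""
        params = {
            "client_id": app_id,
            "redirect_uri": redirect_uri,
            "scope": ",".join(scopes),
        }
        return "https://www.facebook.com/dialog/oauth?" + urlencode(params)

    # Roughly what an app like MHbuddy would ask for: basic profile data,
    # friends' data, and permission to post to the user's wall.
    print(oauth_dialog_url(
        "YOUR_APP_ID",
        "https://example.com/callback",
        ["user_about_me", "friends_about_me", "publish_stream"],
    ))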

When booking a flight, users fill out a simple form that asks where they are travelling from, where they wish to travel to and the dates of travel. MHbuddy will then list flights that are available, how much they cost and if any of their friends are travelling to or live in the same destination. From there, users choose their preferred flight and fill out the payment details.

If a friend happens to be on the same flight or resides where they are headed, MHbuddy will list them below the departure and arrival details, with a “Share with friends” button beside their name. Clicking this button lets the user post a comment on the friend’s wall.

When using MHbuddy to check in, Facebook users can choose their seat via a seat map, which will also show where any friends travelling on the same flight are seated. The “Share with your friends” button also appears here.

Regarding user concerns about privacy, air transport IT provider SITA Lab–which developed MHbuddy for Malaysia Airlines–said travellers can choose whether or not they want to share their flight information with friends.

Malaysia Airlines is not the first company to offer this service via Facebook, as US carrier Delta Air Lines has offered a booking service on the social network since August last year.

This article was first posted in ZDNet Australia.

Social media use puts business reputation at risk

SINGAPORE–One of the biggest risks arising from the use of social media at work is that slipups can damage an organization’s reputation, says an industry observer, who recommends that companies create and disseminate clear policies on how employees may and may not behave when using social media in the office.

While the use of social media such as Facebook and Twitter can expose corporate networks to malicious code and attacks, one of the biggest problems businesses need to worry about is the “potential reputational damage” due to the disclosure of confidential business or inappropriate information, said Steve Durbin, vice president of sales and marketing at the Information Security Forum (ISF). The ISF is a non-profit organization that provides research and guidance on information security issues.

In an interview Wednesday with ZDNet Asia, Durbin noted that a virus infection brought about by social media is a technical issue that is easier to fix. In contrast, damage to an organization’s reputation can happen very quickly and may involve legal liabilities, he pointed out.

According to Durbin, there are currently more risks than benefits associated with social media use in an organization. This is because so far neither companies nor individual employees have reached a level of maturity in terms of establishing the do’s and don’ts when using and integrating social media in a work environment, as well as understanding the implications that come with it, he explained.

Social networking sites, observed Durbin, are a relatively new phenomenon. The rise of consumerization and proliferation of devices such as smartphones and tablets–which provide mobile access to social networks–have also made the situation increasingly complex, he said.

People today are multitasking, accessing both personal and work data on corporate systems, such as checking their corporate e-mail and Facebook accounts all at the same time, Durbin added.

In addition, the explosive growth of social media use has brought about the “avatar effect”, or the merging of the fantasy and real-life worlds, Durbin said. An employee may forget that he is in a work environment and post content on his social network that is inappropriate in an enterprise context, he added.

Explicit policies the way to go
One such case occurred in Singapore last month, when a staff member of the Health Promotion Board (HPB) accidentally posted a personal tweet containing an expletive on the government agency’s official Twitter account. The tweet had already gone viral, retweeted by other Twitter users, by the time it was taken off the site.

In response to this, Durbin reiterated the need for explicit social media policies to be in place within an organization. For instance, employees should refrain from being logged in to professional and personal Twitter accounts at the same time.

However, he noted that one must “differentiate malicious intent and accidental misuse” and that “people will always make mistakes”. He added that what is more important is how a company “deals with those mistakes in a constructive way, looking at why things happen and putting in place clear measures and policies to prevent that sort of thing happening in the future”.

Quizzed on whether social networking reduces or increases staff productivity, Durbin replied that it boils down to the individual.

“There are business benefits if someone, through his social network, managed to resolve an issue much faster than he would have otherwise, just as there are others who spend a significant amount of time using their social networks to plan their next holiday”, he pointed out.

Kill not an option
While he acknowledged that there are potential business and reputation risks of social media at work, Durbin warned against prohibiting social networks in the workplace.

“The issue isn’t social media [itself],” Durbin pointed out, noting that social media is only an “infrastructural layer” that facilitates social interaction. The key is getting some clarity around what is acceptable behavior in a particular organization, he said.

Social elements, he added, have always existed at the workplace and concerns over the impact of social media on employees are the same concerns employers had with workers using their desk phones to converse with their friends.

The difference, however, is that “the medium has changed from the phone to a smartphone, from a chat at the watercooler that stretches to half an hour, to someone sitting at their desk on Facebook,” he said. In addition, the medium is Web-based, which means any information put out can suddenly go viral, as seen in HPB’s case, he noted.

And while people have changed in the way they behave and operate at a societal level, business practices have not evolved accordingly, Durbin said.

Most organizations have corporate policies and procedures in an employee handbook, such as those governing health and safety, and social networking policies are no different, he added.

“It’s about setting the tone for the way you do business and being clear with the people who work with you as to what is acceptable and unacceptable behavior,” said Durbin. That, he noted, is a positive step as opposed to imposing a set of commandments and things that employees cannot do or even totally barring social networks in the office, which is “draconian”.

Google Apps opens up extra storage, for a fee

Google has launched a new service that allows Google Apps users to buy extra storage space for Docs, Picasa Web Albums and photos from Blogger, the company said on Tuesday.

The ability to buy extra storage for Apps accounts is an extension of the User Managed Storage option that personal Google Account holders have been able to take advantage of in order to provide extra capacity.

The User Managed Storage service for Google Apps accounts can be enabled or disabled by the domain administrator, after which the end user can purchase additional storage using their Google Checkout account.

Read more of “Google Apps opens up extra storage, for a fee” at ZDNet UK.

Will Facebook replace company Web sites?

LONDON–A day might be coming when the power of Facebook means that major companies no longer bother with their own Web sites.

That was the startling if self-promotional possibility sketched out by Stephen Haines, commercial director of Facebook’s U.K. operation, while speaking Wednesday at the Technology for Marketing and Advertising conference here. Essentially, Haines argued, companies’ interactions with their customers could take place so often on Facebook that company Web sites would fall by the wayside.

To bolster his argument, Haines showed statistics comparing how many times Facebook users have clicked a company’s “Like” button with how many times per month people visited that company’s Web site. For Starbucks, a top Facebook advertiser, the ratio was 21.1 million likes to 1.8 million site visitors. For Coca-Cola, it’s 20.5 million compared to 270,000; for Oreo, 10.1 million to 290,000; and for Dr. Pepper, it’s 4.1 million to 325,000.

It’s no surprise to hear that Facebook, trying to convert its social networking dominance into corresponding popularity with advertisers, likes a future in which it’s the hub of commercial activity. In a sign that bodes well for that ambition, Haines’ talk drew an overflowing crowd of marketers eager for any tips on how they, too, can capitalize on Facebook usage. In the U.K., millions of Facebook users spend an average of 28 minutes per day at the site, Haines said.

His idea isn’t totally outrageous. After all, plenty of individuals and companies rely on existing online services rather than building everything from scratch. At the individual level, tools such as Google’s Blogger or Yahoo’s Flickr are easier to set up than a custom-built blog or photo-sharing site. Facebook interactions let companies tap into a wealth of customer information and a communication channel, and there’s no need to coax a user to set up yet another username and password.

But the prospect of Facebook becoming powerful enough to build a sort of parallel Web inside its own walled garden is doubtless worrisome in some ways. Sure, the social networking site is embedded increasingly deeply in people’s lives, but relying on it for customer communications means subordinating a key part of a business’s operations to a middleman that has shown no shortage of ambition. Many companies are happy to use Microsoft products and Google services, but companies and antitrust regulators get antsy when too much power is concentrated in one corporation’s hands.

There’s also the possibility that Facebook users, not just companies, might get cold feet. Thus far the site has continued to attract members despite controversies over privacy and other matters, but it’s possible the company might go too far.

It’s music to marketers’ ears to hear Haines say Facebook’s targeting tools can tell companies exactly who the “22-year-olds in Surrey who like football and cricket” are. But Facebook users might not find it so melodic when they find out that companies, not just their friends, have a keen interest in the favorites they list on their profile.

Even if Facebook doesn’t somehow supplant lots of Web sites, though, there’s no denying Facebook is becoming more important to marketing. The company is adapting to the idea.

The company has a variety of tools available to marketers:

  • Ways to offer free samples to customers, something ketchup maker Heinz has used.
  • The ability to attract attention of smartphone users making local check-ins. Clothing retailer The Gap gave away 10,000 pairs of jeans to the first 10,000 customers to use the Facebook local check-in service, and Mazda sold 100 cars–exceeding expectations–with a 20-percent-off offer at five UK auto dealerships, Haines said.
  • E-commerce sites can be built into Facebook pages. Max Factor didn’t want to lose visitors to its Facebook page to another site when customers were ready to buy something, so a partnership with Amazon lets them buy products without leaving the page.
  • “Reach block” ads that change as many as five times as a 24-hour period progresses to send a sequence of ad messages to Facebook users.
  • Surveys let companies try to engage customers in company decisions. Vitaminwater used voting among other mechanisms to generate 1.3 million “connections” with possible customers during its “find a new flavor” marketing campaign.
  • Applications built atop Facebook’s interface let companies create custom-made interactive programs.

On top of that, Facebook is experimenting with new ideas. One is “newsfeed story ads”, in which commentary that ordinarily would appear as updates in a Facebook user’s news feed appear in advertisements, too. Another is “application social context ads”, in which an app can show a user which of his or her contacts also is using it.

Regardless of the extent to which Facebook actually replaces in-house Web sites, Facebook as a marketing channel isn’t for everybody. Haines had plenty of examples of companies that have benefited from it, naturally, but he also had cautions for those thinking of using it seriously.

First, be prepared for a long-term commitment to keep a site on Facebook lively.

“If it doesn’t change, it’s probably not worth dabbling” with a Facebook site, Haines said. For a social-networking site to be useful in marketing, it’s got to “stimulate” the customers, he said.

Second, plan to respond to very public criticism.

“If you ignore it, it’s the worst thing you can do,” he said. “Be prepared for it, because it will happen.”

Ideally, good responses can turn critics into fans, though.

Third, companies should be judicious about the fine line between engaging customers and annoying them. One company, which Haines didn’t name, had 200,000 people liking its page.

“They sent seven messages a day,” he said. “Their fan base dropped off.”

Internet Explorer gains, Firefox wanes in Feb

Microsoft’s Internet Explorer saw a boost in market share during February, while Mozilla’s Firefox dropped, according to new numbers from analytics firm Netmarketshare, which folded fresh Internet-population data from the Central Intelligence Agency into its reporting.

The shift can be attributed to a re-balancing of Internet users by location, which gave China a healthy boost while reducing the weighting of countries that had previously counted for more, such as the United States and parts of Western Europe.

“In February, the C.I.A. released new data on how many Internet users per country there are. It shows a large increase in the global percentage of Chinese users and a decrease in the global percentage of users from the U.S., U.K., Germany, France and other developed countries,” the report said. “These geographic shifts in Internet usage have a significant impact on the global usage share numbers starting in February.”

For IE, that impact amounts to a 63.26 percent share on Windows machines, up from 62.40 percent the previous month. IE8 jumped 1.03 percentage points, IE9 gained a modest 0.10 percent, and the IE9 beta topped 2.09 percent of Windows 7 machines. Firefox, meanwhile, dropped by a little more than 1 percentage point to 21.74 percent of total Internet use.
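The mechanics of such a re-weighting are straightforward: a browser’s global share is the average of its per-country shares weighted by each country’s Internet population, so revising the population estimates moves the global number even if no individual user switched browsers. A toy Python sketch with invented figures (not Netmarketshare’s actual data):

    # Per-country share for one browser (invented numbers).
    share = {"US": 0.55, "China": 0.85, "Germany": 0.50}

    # Internet users per country, in millions, before and after a revision
    # of the population estimates (also invented).
    users_old = {"US": 240, "China": 300, "Germany": 65}
    users_new = {"US": 230, "China": 420, "Germany": 62}

    def global_share(share: dict, users: dict) -> float:
        """Population-weighted average of per-country shares."""
        total = sum(users.values())
        return sum(share[c] * users[c] for c in share) / total

    print(f"before: {global_share(share, users_old):.1%}")  # ~69.3%
    print(f"after:  {global_share(share, users_new):.1%}")  # ~72.3%

Because China carries both a large weight and, in this toy example, a high share for the browser in question, boosting its weight lifts the global figure, which is the same effect the CIA update had on IE’s numbers.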

Alongside the numbers, Microsoft announced that the release candidate of IE9 had been downloaded 11 million times since its release in early February. A Microsoft representative told ZDNet Asia sister site CNET that number includes upgrades from the beta and Web downloads. Combined with previous download numbers of the IE9 beta, the total tally is 36 million, which Microsoft says tops combined downloads of the IE8 beta, and its release candidate.

Netmarketshare said that country-level reporting has been unaffected by the change, and that the adjustment will correct inaccuracies with its reports. The company also reported that the new CIA numbers had impacted Mac and iOS Internet use reporting, causing slight dips despite neither platform losing users.

Bing deepens ‘liked results’ Facebook integration

Microsoft says the feature that highlights Facebook activity around some of Bing’s search results has been “extended” to include any and all URLs.

The company announced the expansion in a blog post, saying that this integration was just “part of a longer journey” and that it played a complementary role to the company’s efforts in adding a social layer to its results, as it did with Twitter.

“This is the first time in human history that people are leaving social traces that machines can read and learn from, and present enhanced online experiences based on those traces,” Lawrence Kim, the principal program manager for social search, said in a post on the Bing Team blog. “As people spend more time online and integrate their offline and online worlds, they will want their friends’ social activity and their social data to help them in making better decisions.”

Microsoft had originally unveiled the social features back in October of last year at a press conference with Bing execs and Facebook founder and CEO Mark Zuckerberg. The company rolled them out to U.S. users three weeks later.

Last week, Google unveiled a similar social feature for its search results, which takes advantage of data from Facebook, Twitter and other social networks to display links that have been shared by other users.

The feature remains exclusive to U.S. users of Bing, the company said.

Zynga, MOL unveil game cards in Asia

SINGAPORE–Zynga and MOL have announced the launch of Zynga game cards in Asia, allowing users to pay for virtual goods used in Zynga games.

Available in values of US$2, US$5 and US$10, the cards will be sold in retail chains in Singapore such as 7-Eleven and Comics Connection, revealed company executives during the launch here Tuesday. They will also be available in Malaysia, India, Indonesia, the Philippines and Thailand.

Current customers of MOL, a Malaysian online payments company, can transfer their existing MOLPoints virtual currency to the Zynga game cards. The social games developer said the cards will provide more options for consumers in Asian markets.

According to an ABI Research study, global online game sales–propelled by Asia–will hit over US$20 billion in 2012.

More details to follow…

Gmail to fix erased e-mail messages today

Google is set to restore, by today, thousands of Gmail accounts affected by a glitch that erased e-mail, settings and contacts.

The mysterious glitch hit Gmail yesterday, affecting some 0.02 per cent of Gmail users. The BBC reports Gmail has some 170 million users.

Affected users flocked to Google’s forums to research the problem, which has reportedly also affected Google Apps users.

Google said it will fix the unidentified problem by about 6pm AEDST (3pm SGT) today.

“Google Mail service has already been restored for some users … the remaining 0.012 per cent of accounts are being restored on an ongoing basis,” Google wrote in its latest update to its service dashboard.

The company had revised the estimated number of affected users from 0.08 per cent to 0.02 per cent as of 6.46am this morning.

This article was first published at ZDNet Australia.

Reports: J.P. Morgan Chase in talks for Twitter stake

J.P. Morgan Chase is in talks to take a minority stake in Twitter that would value the microblogging site at US$4.5 billion, according to published reports.

The investment firm’s new US$1.2 billion Digital Growth Fund is leading the effort; however, exact terms of a possible investment are unknown, according to reports in The Wall Street Journal and The New York Times that cited people familiar with the matter. The sources cautioned that talks are continuing and there is no guarantee an agreement will be reached.

Twitter representatives declined to comment.

The fund, which is expected to focus on private Internet and digital-media companies, is also said to be interested in investing in online gaming giant Zynga and group coupons site LivingSocial, according to the reports.

Twitter completed a US$200 million funding round in December–led by investment firm Kleiner Perkins Caufield & Byers–that gave the company a US$3.7 billion valuation. The funding was expected to be used to ramp up engineering resources to support a growing user base and to allow early employees to cash out some company stock.

The talks come on the heels of Facebook’s US$500 million funding round through deals with investor Goldman Sachs and Russian investment firm Digital Sky Technologies that gave the social-networking giant a valuation of US$50 billion.

Twitter, which limits its users to 140-character broadcast messages, has been credited with playing a pivotal role in the civil unrest that led to the resignation of Egyptian President Hosni Mubarak.

Facebook plans to resume address, phone sharing

Despite U.S. congressional criticism, Facebook is planning to resume the aborted rollout of a feature that allowed the optional sharing of addresses and mobile phone numbers.

Facebook said in a letter (PDF) released today that it is evaluating different ways to “enhance user control” over information sharing that would go into effect “once the feature is re-enabled.”

The social-networking site encountered some criticism in January after announcing the feature, which allowed applications to request permission to access user information. Only if the user clicked “Allow” was information shared.

Only three days after announcing the platform update, Facebook voluntarily delayed it, with Douglas Purdy writing that “we are making changes to help ensure you only share this information when you intend to do so.”

Reps. Ed Markey (D-Mass.) and Joe Barton (R-Texas), who have a history of assailing tech companies including Apple and Google over perceived data transfer snafus, suggested in a letter (PDF) on February 2 that the pop-up permissions window was insufficient “given the sensitivity of personal addresses and mobile phone numbers compared to other information users provide Facebook.”

Facebook’s response, prepared by Marne Levine, vice president for global public policy, stressed that applications that run on the Facebook platform have long had the ability to ask for information. For example, Levine wrote, “a photo-printing application that prints photos for a user requests permission specifically to access a user’s photos; a social-gaming application that allows users to play a game with his or her friends requests permission to access the user’s friends list.”

In last month’s announcement that dealt with contact information, Levine wrote, “we allowed applications to ask users for that information, through a permissions screen…that provided clear and conspicuous notice to the user regarding what information the application is seeking.”

And in response to the politicians’ point about minors, Levine said that anyone under 13 is prohibited from using Facebook, and that the company is “actively considering” whether to allow applications to request such information even from older minors.

Markey said in a statement today that he’s not satisfied with Facebook’s response.

“I don’t believe that applications on Facebook should get this information from teens, and I encourage Facebook to wall off access to teens’ contact information if they enable this new feature,” Markey said. “Facebook has indicated that the feature is still a work in progress, and I will continue to monitor the situation closely to ensure that sensitive personal user data, especially those belonging to children and teenagers, are protected.”

Separately, Facebook announced last week that it’s asking for comments on a proposed revamp of its privacy policy that’s meant to make it easier to understand.

If it’s on the Internet, does that make it quotable?

commentary The news headlines in Grand Rapids, Mich., were dominated this week by local sports, a debate over wage raises for workers who receive tips, and a man who pleaded guilty to a misdemeanor for encouraging his dog to kill a raccoon. But, there was also some newsroom controversy within the local Grand Rapids Press newspaper: Just how quotable is Twitter?

An entertainment reporter, Rachael Recker, wrote an article in which she quoted several Twitter users and identified them by username. For those of us accustomed to the tech press, this is no surprise–but for a local newspaper, it’s unorthodox territory. Recker was met with criticism from readers as well as some of the Twitter users quoted, who, according to a column published earlier this week in the Grand Rapids Press, thought that the journalist “wasn’t doing a full job of reporting, since she didn’t contact them personally for a quote…(and) questioned whether it is appropriate to use tweets in an online story without specifically asking for permission.”

This minicontroversy–if something is public on the Web, does that make it automatically open for quoting within the realm of copyright restrictions?–is not restricted to Twitter. One-to-one e-mails should be private. So should instant messages. Beyond that, it gets messy.

Question-and-answer site Quora has a little-known policy in which users can flag their answers, many of which are extensive and detailed, as “not for reproduction.” Facebook, encouraging an ever-growing amount of public content, says in its terms of service that users grant it “a nonexclusive, transferable, sublicensable, royalty-free, worldwide license” to what they say and upload on the social network, but is very fuzzy on details when it comes to quoting and reproducing that content outside of Facebook. The Web has not just flooded the world with print content; it’s flooded it with new kinds of content–public and semipublic e-mail lists, Facebook groups, blog comments, answers on question-and-answer sites–and the old rules of quotability don’t always fit.

But here’s somewhere to start: If something is public, it’s quotable. If you don’t want to be quoted, don’t say it on the Internet. If you have a public Twitter account and say something, then, yes, it’s public. Should Twitter users expect to be contacted and asked for permission to have their tweets reprinted? Don’t count on it.

It can get a little more complicated, of course. Quora requires users to be logged in before they can browse anything, which means that content isn’t completely public, and on Facebook it’s hard to tell what isn’t hidden behind any kind of contacts-only or co-workers-only restriction unless you log out and reload the page to see if it’s still visible. So in both of those cases, quoting is significantly more ambiguous than on Twitter.

Then there’s the Web’s panoply of semipublic services, forums and groups and e-mail lists galore. I ran into this headlong when, in a story earlier this year, I quoted a user’s post from NextNY, an e-mail list that I’ve been on since 2006 and to my recollection had joined without any trouble; the list openly solicits membership on its Web site, lists no reprinting policy, and has more than 3,000 members. I contacted that user to ask permission but didn’t hear back, and with a deadline impending decided to just run with it. The user ultimately got back to me and asked if I might disassociate his name from the quotation. Digging a little deeper, I learned that the NextNY archives are not indexed in search engines, and while membership is openly solicited, I verified with an administrator that new users do have to be approved by a moderator. That was “nonpublic” enough for me. I ran a correction.

That was an instance in which the situation was ambiguous, but I now have a new rule of thumb for dealing with e-mail list and forum quoting in the future–and I now encourage my colleagues and acquaintances who administer e-mail lists to come up with policies for republishing content if they don’t have them already. Journalists aren’t the only ones who publish on the Web; a line from a semipublic e-mail list could easily be reproduced and disseminated by anyone with a Twitter account or a blog.

But in the case of the Grand Rapids Press incident, I side with Rachael Recker–as does her employer. “For the record, we consider tweets fair game for publication unless they appear in a direct message. Same goes for Facebook posts that are accessible to public viewing,” the newspaper’s column explained. “Almost everyone on Twitter retweets interesting comments, and no one asks permission to do so. You’re sharing that person’s observations with potentially thousands of people, depending if it is retweeted yet again.”

(Or, potentially millions, if Ashton Kutcher comes across your tweet and decides he finds it brilliant.)

The Web is forcing us all to redefine what’s public and what’s private in many instances. We often don’t know who might be listening, and we can never be sure who might be capable of broadcasting that information to the masses. In 5 or 10 years, we may have well-known rules and guidelines for dealing with quoting and republishing digital media–but not yet. GigaOM writer Mathew Ingram may have summed it up best in a tweet–which I am now quoting (how meta!)–in which he riffed on the Quora “not for reproduction” policy.

“Here’s a tip for anyone using the ‘not for reproduction’ thing on Quora,” Ingram wrote. “Don’t want your answer quoted? Don’t put it on the Internet.”

Google to content farms: It’s war

Google has set in motion the changes that it announced recently to combat “content farms”–companies that produce large amounts of inexpensive, search-engine-optimized content that have been frequently decried for their low quality.

But will there be sweeping changes in the way we view and navigate the Web? It’s hard to tell just yet.

“In the last day or so we launched a pretty big algorithmic improvement to our ranking–a change that noticeably impacts 11.8 percent of our queries–and we wanted to let people know what’s going on,” Google said in a blog post last week. “This update is designed to reduce rankings for low-quality sites–sites which are low-value add for users, copy content from other websites or sites that are just not very useful. At the same time, it will provide better rankings for high-quality sites–sites with original content and information such as research, in-depth reports, thoughtful analysis and so on.”

Part of this strategy involves a Chrome browser extension called Personal Blocklist.

But Demand Media, the recent IPO at the forefront of the “content farm” controversy, said last week that it’s been unaffected by Google’s algorithm change, so far. “It’s impossible to speculate how these or any changes made by Google impact any online business in the long term–but at this point in time, we haven’t seen a material net impact on our Content and Media business,” Demand Media executive vice president Larry Fitzgibbons said last night in a blog post. Demand Media, nevertheless, leaves open the possibility that its content could be affected in the future.

Indeed, Google said the changes may not be visible immediately, especially as the modifications to its algorithm are currently affecting only U.S. users. “We’re very excited about this new ranking improvement because we believe it’s a big step in the right direction of helping people find ever higher quality in our results,” the Google blog post explained. “We’ve been tackling these issues for more than a year, and working on this specific change for the past few months. And we’re working on many more updates that we believe will substantially improve the quality of the pages in our results.”

At stake for Google is critics’ charge that content farms are making search results less useful and less relevant. With pressure from the “social search” trend fueled by Facebook’s success, and with search rival Bing inching up in market share, this change may be more pressing for Google than it appears at first glance.

Facebook beefs up Like button

Is Facebook getting ready to show its Share button the door?

The social-networking giant recently released an update that adds Share button functionality to the Like button, perhaps presaging the phasing out of the Share button. When a Facebook user clicks the Like or Recommend button on a third-party site, a full feed story with headline, blurb, and thumbnail is generated on the user’s wall. Users will also have the option of commenting on it.

Previously, unless third-party publishers chose the Like with Comment version of the button for their site, users got only a link to the story in their recent activity section on their wall. Now the Like, Share, and Recommend buttons will all generate the full story with headline, blurb, and thumbnail.

The change should drive more referral traffic to third-party sites and perhaps reduce user confusion over how the buttons work. But because the content will now be more prominent on users’ walls, some may be more reluctant to click the Like button.

Facebook is apparently no longer supporting development of the Share button, having removed it from the developer documentation section of the site; a search for Facebook Share in the developers section now redirects to the Like button documentation page.

Facebook representatives did not immediately respond to a request for comment.

Google probing lost Gmail messages, contacts

Gmail users complained of suddenly and mysteriously having lost old e-mail, folders, and contacts, and Google said it was investigating the issue but that it did not appear to be widespread.

At 12:09 p.m. PT, Google said on its Apps status dashboard that it was aware of the issue and was investigating. At 5:02 p.m., Google said it was “continuing to investigate this issue. Google engineers are working to restore full access. Affected users may be temporarily unable to sign in while we repair their accounts.” Less than 0.08 percent of the Gmail user base is affected, Google said.

The issue came to Google’s attention when users started lighting up the company’s support forums with complaints of lost e-mails. “I have lost ALL on my emails/folders etc. from gmail. Why would this happen? How can I restore everything?” user bkishan wrote in the forum.

“This morning when I woke up I only saw two mails in my gmal box that were sent last night. All mail was gone,” user Wienke wrote to the forum. “I also got some notifications which you will get when you have a new account. Seems somehting must have been reset.”

Google representatives did not immediately respond to a request for comment.

Facebook tests souped-up privacy policy

Facebook announced last week that it’s seeking user comment on a proposed redesign of its privacy policy that’s meant to make the policy easier to understand while bringing the world of legalese-smothered documents into the widget-filled realm of the 21st century.

In a post to Facebook’s site governance section, the company’s privacy team offers a look at its “first attempt” to re-organize, rewrite, and add interactivity to the current policy, which is essentially your standard mass of small black text.

Among other potentially interesting re-imaginings, the proposed redesign features an interactive tool intended to demonstrate how profile data is put to use in serving advertisements (click “Personalized ads” and scroll down to “Try this tool”). The tool puts Facebook members into the shoes of someone creating and targeting an ad. It’s not clear if users would deem it an educational aid or a nuisance in practice, but that seems to be part of why the potential redesign is being put to public scrutiny in this way.

The privacy team says the rough redesign is “outside of even our regular process of notice and comment”, and it continues:

“Because we’re tackling a challenge that matters to so many people–and doing it in a way that is so different from what we’ve done before–we’re giving you a look even earlier in the process. If people like what we have, we’ll put it through our regular notice and comment process at a later date.”

The team also makes it clear that the effort is meant to involve the reorganization and presentation of the privacy policy, not any significant changes to its actual content. “We’ve tried not to change the substance of the policy but, in our effort to simplify, we have added some new things that were elsewhere on the site (like our help center) and have made some other concepts clearer,” it says.

Facebook, of course, has been battered by high-profile complaints from privacy advocates, including a U.S. senator or two. Last year, the company, which hosts the private data of many millions of members around the globe, instituted major changes to user privacy controls in response to such concerns.

Still, the company has given some indication that it could continue its “shoot first, ask questions later” approach to privacy-related site changes. It launched a tweak this past January that potentially made users’ addresses and phone numbers available to app developers. That change was hastily reconsidered after it touched off yet another kerfuffle about the company’s practices.

In its post about the redesign, the privacy team speaks proudly of Facebook’s “unconventional, innovative spirit.” True, the aforementioned tool for explaining ads could conceivably break new ground in the staid world of “reading the fine print.” (Heck, if you’re gonna go interactive, why not get Zynga involved–“MarketingVille” anyone?) But the truly visionary move here might just turn out to be the outreach effort itself. Making an extra effort to solicit comment before instituting a privacy-related change? For Facebook, that could be the real innovation.

Aust voluntary Net filter to start mid-2011

The Department of Broadband, Communications and the Digital Economy (DBCDE) under Stephen Conroy’s charge has said voluntary filtering of the Internet by three of Australia’s largest Internet service providers (ISPs) is on track to kick off in the middle of this year.

In July last year, Telstra, Optus and Primus revealed an agreement with the Australian Federal Labor Government to voluntarily implement filtering technology to block any of their customers from accessing child pornography online, while the government conducted a review into the Refused Classification category of content, which its much broader mandatory filtering project would block.

On Tuesday night, DBCDE deputy secretary for its Digital Economy & Service Group, Abdul Rizvi, told a Senate Estimates committee hearing that the voluntary filtering was slated to kick off in mid-2011. The Australian Communications and Media Authority, he said, is currently compiling a subset of Internet addresses limited to child abuse content, and will trial a “secure method” of transmitting that list to ISPs in the near future.

This article was first published at ZDNet Australia.

Libya’s Internet hit with severe disruptions

Libya’s Internet links have been severely disrupted as chaos spreads across the country, with a defiant Col. Moammar Gadhafi today vowing to die a “martyr” rather than relinquish his grip on power.

As reports describe portions of Libya as a “war zone” and the country’s deputy U.N. ambassador says “genocide” is under way, inbound and outbound Internet traffic has plummeted to a fraction of what’s normal. Over the weekend, traffic appeared to be following a “curfew” pattern, with more restrictions imposed in the evenings; YouTube is now almost entirely unreachable, and Facebook is blocked.

Craig Labovitz, the chief scientist of Arbor Networks, said that as of Tuesday, Libya is experiencing a significant Internet outage with traffic volumes 60 percent to 80 percent below normal levels.

That follows a complete outage on Friday night, with the country vanishing from the Internet as completely as Egypt did during its revolts a few weeks earlier. Partial service was restored Saturday morning, only to be cut off again at around 2 p.m. PT, or midnight local time.

Jim Cowie, co-founder and chief technology officer of Internet intelligence firm Renesys, says it’s not clear whether the disruptions are intentional or caused by other factors such as power outages. (A report by a CNN correspondent in eastern Libya said the power was up but the Internet was down.)

“The outages have lasted hours, and then service has resumed,” Cowie said. “All of that is consistent with alternative explanations, such as power problems or some other kind of single-operator engineering issue.”

Egypt’s Internet disruptions were easier to identify because of the larger number of broadband providers, almost all of which went dark simultaneously. In Libya, however, there appears to be only one with connections to the rest of the world: Libya Telecom and Technology, which is state-owned and enjoys close ties to Gadhafi.

Egyptian networks “were withdrawn within the same 20-minute window–hundreds and hundreds of networks were affected, and stayed down for days,” Cowie said. “Traffic flowing through Egypt to other destinations in the Middle East was utterly unaffected. All of that gave a fairly unambiguous signal from the start that it was a political event.”

Graphs from Google’s Transparency Report and Akamai Technologies reflect the hiccups in Libya. The data shows daily rises and dips in normal Internet traffic from Libya, followed by a stuttering, interrupted flow after Friday. Traffic appeared to be rising on Tuesday, but at far lower levels than a week before.

YouTube’s traffic volume, according to the Transparency Report, is down as much as 90 percent from normal levels.

Instant messaging and Web browsing traffic has dropped more quickly than other types, according to Arbor’s Labovitz.

While Libya’s government has engaged in relatively modest, selective Internet filtering in the past, the list of off-limits Web sites has grown in the last week.

The AFP news service reported on Friday that Facebook was blocked, and Al Jazeera’s Web site is now off-limits. Al Jazeera said Tuesday that it’s “suffering interference on its Arabsat satellite frequency” as well.

That follows Gadhafi’s recent warning to Libyans not to use Facebook, where activists have created groups calling for reform.

Bit.ly CEO John Borthwick wrote on Quora that Internet blocking in Libya “will not affect” his company’s site because some of the root nameservers for the .ly top-level domain are located in Oregon and the Netherlands. Page.ly’s Joshua Strebel offered similar assurances.

On the other hand, if for some reason Libya’s government decided to target foreign .ly Web sites–which of course wouldn’t make much economic sense–it could require that those domains be removed from the master in-country registry. They would then begin to disappear from the Internet over the next few weeks.
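Borthwick’s reasoning is easy to verify from outside Libya, since the DNS root’s delegation records for .ly are publicly queryable. Below is a minimal sketch in TypeScript on Node.js–our illustration, not anything bit.ly or Renesys published–that looks up a top-level domain’s authoritative nameservers and their addresses; geolocating those addresses (not shown) is what reveals whether they sit inside the country.

    // Minimal sketch: ask the DNS where a TLD's authoritative nameservers
    // live, using Node.js's built-in resolver. Output depends on the live
    // DNS at query time.
    import { resolveNs, resolve4 } from "node:dns/promises";

    async function whereIsTld(tld: string): Promise<void> {
      const nameservers = await resolveNs(tld); // delegation from the root
      for (const ns of nameservers) {
        const addrs = await resolve4(ns).catch(() => [] as string[]);
        console.log(`${tld} is served by ${ns} at ${addrs.join(", ")}`);
      }
    }

    whereIsTld("ly");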

It’s time for Google Docs to work offline

Google is betting on a future with ubiquitous, affordable, wireless, high-speed Internet access. That may be smart in the long run, but this week that philosophy drove me straight back into the arms of Microsoft.

My technology choices generally come down to pragmatic rather than religious choices, and it was pragmatism that led me to embrace Google Docs last year. I like the fact that I can work simultaneously on multiple computers–indeed, even on mobile phones these days–and that multiple people can easily collaborate. My requirements for advanced formatting and formulas are low enough that I generally can put up with the shortcomings.

Here’s what I don’t like, though: For Google Docs, you need a network connection.

I just spent five days at the Mobile World Congress show in Barcelona. Contrary to what one might hope for a show devoted to the latest in mobile communications, the wireless networking at the show generally ranged somewhere from crippled to crushed.

For reasons that baffle me, network giant Cisco sponsored the show’s Wi-Fi, with signage in the halls touting it and attendees receiving a flier explaining how to use it. I’d have thought that Cisco, a company with a brand to promote and protect, would have learned by now to steer clear of tech trade shows in which auditoriums filled with Net-enabled gadgets bring wireless networks to their knees.

I eventually got by with a Vodafone 3G dongle plugged into my computer’s USB port, but that only worked some of the time (it was too bulky to use the dongle and the other USB port at the same time, for example). And of course the data plan was expensive, I had to unplug it much of the time, and connecting to the network was slow.

Under these circumstances, was I going to rely on a word processor that needed a network connection? Not a chance.

Thus, it was back to Microsoft Word for me during the show.

I recognize that these trade show circumstances might be a little extreme when it comes to network failings, but there have been plenty of times driving around my previous home in California and my present one in England in which the network doesn’t work for me. Taking the train into London, a classic commuter scenario if there ever was one, is one example.

Google had tried to enable offline Google Docs in years past using its now-discontinued Gears plug-in. That didn’t work for me for a number of reasons: First, I use a Mac when traveling, and Gears broke with the release of Mac OS X 10.6, aka Snow Leopard. Second–and maybe this was some kind of user error–I just found it awkward.

I wasn’t alone. The relatively low usage of the feature probably minimized the pain when Google announced last year it was temporarily ditching the offline feature in a Google Docs overhaul that I otherwise like for new abilities.

“We need to temporarily remove offline support for Docs starting May 3rd, 2010. We know that this is an important feature for some of you, and we are working hard to bring a new and improved HTML5-based offline option back to Google Docs,” said product manager Anil Sabharwal in a blog post at the time.

How long will we have to wait? In December, Google promised that offline Google Docs will return “early in 2011”. An eighth of the way into the new year, I’m looking at my clock, and Google isn’t commenting on any particulars at this stage.

What’s the holdup? First, I suspect, is browser support for a new standard called IndexedDB, aka Indexed Database. A general consensus backing IndexedDB only emerged a year ago, and browser support is only arriving now.

Aside from the browser issues, Google has some re-engineering to do as well. The earlier offline support relied on a storage mechanism in Gears very similar to a browser technology called Web SQL Database. But facing Mozilla and Microsoft opposition, Web SQL lost out to IndexedDB.

In a perfect world, offline Google Docs would be an invisible, unnoticeable step away from online Docs. That means first and foremost that I’d be able to edit a document without an Internet connection, of course, with changes being synced with the online incarnation once a Net connection was re-established. But it would mean more than that. I also should be able to create new documents, search my archive, and perform file-management tasks such as adding a document to a collection.

Those features are among the most basic actions one takes for granted in the Microsoft Office world. Although Google Docs shows promise, without those features, it’s profoundly broken until that perfect network arrives.
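For a rough sense of what that invisible offline layer might involve under the hood, here is a minimal sketch–emphatically not Google’s implementation–of the edit-offline, sync-on-reconnect pattern using IndexedDB in TypeScript; the database name, store name and /sync endpoint are all hypothetical.

    // Minimal sketch: queue document edits in IndexedDB while offline,
    // then replay them when the browser regains connectivity.
    // "docs-offline", "pendingEdits" and "/sync" are made-up names.
    const request = indexedDB.open("docs-offline", 1);

    request.onupgradeneeded = () => {
      // One object store of pending edits, keyed by an auto-incrementing id.
      request.result.createObjectStore("pendingEdits", { autoIncrement: true });
    };

    request.onsuccess = () => {
      const db = request.result;

      // Record an edit locally; this succeeds with or without a network.
      function queueEdit(docId: string, change: string): void {
        const tx = db.transaction("pendingEdits", "readwrite");
        tx.objectStore("pendingEdits").add({ docId, change, at: Date.now() });
      }

      // When connectivity returns, push queued edits to the server
      // and clear the local queue (optimistically, in this sketch).
      window.addEventListener("online", () => {
        const store = db
          .transaction("pendingEdits", "readwrite")
          .objectStore("pendingEdits");
        const all = store.getAll();
        all.onsuccess = () => {
          for (const edit of all.result) {
            fetch("/sync", { method: "POST", body: JSON.stringify(edit) });
          }
          store.clear();
        };
      });

      queueEdit("doc-123", "insert 'hello' at offset 0");
    };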

Mobile operators face increasing Facebook threat

Mobile operators are underestimating Facebook, a formidable competitor whose recent efforts have seen the social networking site expand its presence into voice communications, location-based services and mobile advertising.

According to an Ovum report released Tuesday, since making its first move onto the mobile platform in 2006, Facebook is now “a force to be reckoned with”, with more than 200 million users today interacting with the social network via their mobile phones.

Eden Zoller, principal analyst with the research firm, wrote in the report that the Internet giant is beefing up efforts to be a platform from which users communicate as well as consume and share information–regardless of where they are, and which device they use.

He noted that Facebook made several moves that placed the company in competition with mobile operators, including its integration deal with Skype for voice communications, and the launch of its e-mail service in November 2010.

It also unveiled a location-based service via Places and is currently looking at mobile advertising via the Facebook Deals check-in service.

“Facebook is encroaching directly on mobile operator territory and should not be underestimated,” Zoller cautioned. “However, operators are being slow to wake up to the extent of Facebook’s ambitions and tend to view it as a benign, non-competitive presence that they are keen to form partnerships with.”

The Ovum analyst also pointed to speculation that the Internet company had plans to release its own phone, which he said could serve as “the final piece of the puzzle”. Facebook, however, has denied such claims.

Zoller added that even though Facebook is unlikely to unveil its own mobile phone, it could be keen to work with partners to develop a customized device platform. “This would in effect make Facebook a social operating system,” he said.

The analyst noted that mobile operators will be interested in establishing alliances with the social networking site, for example, by offering easier access to its service and enabling address book integration.

Zoller said: “While there are good reasons why operators should wish to partner with Facebook, they should be more alert to the fact that it is shaping up to be a strong competitor. It is only by understanding Facebook fully that operators can engage with it effectively, be that on a collaborative or competitive basis.”

British startup INQ Mobile earlier this month announced plans to release a new Android-powered phone with tight Facebook integration, including features such as Facebook-related buttons on the homescreen and integration of Facebook friends with the phone’s contacts.

There are currently over 500 million active Facebook users globally, 70 percent of whom are based outside the United States.

Facebook users that access the site via their mobile devices are twice as active as those who do so via non-mobile platforms. Over 200 mobile operators in 60 countries deploy and promote Facebook mobile products.

US prison demands Facebook login details from staff

In a wonderful example of how privacy rights can be casually ignored, U.S. jailkeepers at the Maryland Division of Correction (DOC) are requiring all new members of staff, as well as those recertifying, to provide full access to their Facebook accounts for use in background checks.

The new regulations came to light with the case of Robert Collins, who was undergoing recertification last year for a position following a 4-month leave of absence, Slashdot reports. Collins, who’s now suing his employers with the help of the American Civil Liberties Union, was informed that he was required to provide full access to his Facebook account as part of the interview process and was then made to wait while the interviewer logged into his account and brazenly browsed his profile.

The reason given for this blatant invasion of privacy was to enable the government to examine Collins’ wall posts, e-mails, photos and friend lists to ensure that new employees within the facility were not engaged in illegal activity or affiliated with known criminals–particularly gang members.

Read more of “US prison demands Facebook login details from staff” at CNET UK.

Future of search in personalization

The future of search in the new decade will see continued focus on deeper social context and speedier generation of relevant results, alongside a growing presence on mobile devices and personalized search, say industry observers and players.

Andrew McGlinchey, head of product management for Google Southeast Asia, told ZDNet Asia that search will continue to be “faster and more personalized” in 2011. Elaborating on the search giant’s efforts, he said Google is beefing up efforts in Google Instant and focusing on innovating its search engine in four directions: personalization, localization, social context and relevancy.

In an e-mail interview, McGlinchey explained that by taking into account factors such as a user’s geographical location or social connections, search results will be precise and specifically tailored to the individual user. This also means there is increased relevancy since the right information is obtained in the quickest possible time, he said.

“Search is becoming more dynamic and less static, and search engines of the future will be better in part because they will understand more about the individual user,” stated the Google executive, who noted that users would still have control over their personal information, and that the search engine’s use of personal data will be transparent and occur only when permitted by the user.

In a blog post this week, Google announced that when a person searches via its engine, social search results will be displayed more prominently and be listed among search results based on relevance. In the past, such results only appeared at the bottom.

Search today runs on a combination of social, real-time and contextual elements, according to Yvonne Chang, vice president and managing director of Yahoo Southeast Asia.

Yahoo, whose search is powered by Microsoft’s Bing, is focused on innovating the user’s overall search experience, Chang told ZDNet Asia in an e-mail.

In addition, the Internet company will also be investing in contextual search to help people find more relevant content by “getting in front of users at the right place and at the right time, and presenting them with experiences tailored to their ideas”, she said.

Contextual search analyzes the page that is being read and provides a list of related search results, she explained. Also, while a user is reading a Webpage, contextual search will present additional related info, encouraging users to chase those links, she added.

Out of the desktop
According to Chang, the search landscape is getting ready to resolve gaps in user experience as the adoption of new devices grows. Search queries are going beyond the search box, she said, pointing to Sketch-a-Search, Yahoo’s mobile search app which lets users “draw” a circle around an area on a map to narrow search results specific to that location.

McGlinchey agreed that search engines are taking on new forms since the early days when “it was all text”. For instance, he said online users can now conduct search queries using photos via Google Goggles, and with their own voices via Voice Search.

Both features, introduced over the last two years, already indicate that mobile apps are a long-running trend, if not the new frontier for search, said Adam Bunn, director at search marketing agency Greenlight.

Smartphone users already feel the pull of their respective app stores as much as, if not more than, a traditional search engine, Bunn noted. What this means is that instead of visiting a Web site to seek out answers, mobile users will install an app that can answer a question, as well as similar questions over the long term, he said.

With the mobile arena growing in importance, he predicted that search engines in 2011 will start to recommend apps that may be relevant or contextually associated to a user’s search query.

“Mobile apps will manifest as another type of vertical search [and] be pulled into the normal results as a universal search element,” Bunn pointed out.

James Roy, senior analyst with China Market Research Group, said there is significant room for growth in search centered around specific apps for mobile devices such as smartphones and tablets.

Rather than accessing a mobile Web browser, users will find it easier to use a specific app that can list restaurants in the vicinity of their location and yield search results within this context including relevant reviews, addresses and phone numbers, Roy explained in an e-mail.

He added that the overarching trend for search engine activities this new decade is focused on being “faster”–that is, delivering results in real-time–and “smarter”, yielding more relevant results.

The two factors of speed and intelligence in search are closely intertwined, he said, noting that people do not want to wait as they search, and they do not want to waste time on the wrong search.

Social search will continue to grow in importance because it is “smarter” in identifying results more likely to be of relevance to a user, based on social connections particular to that user, Roy said.

Besides the integration of social data into search, such as the Facebook-Bing partnership, he said social search will also be increasingly prevalent in the form of question-and-answer (Q&A) services or networks, as seen in the rise of startups such as Formspring as well as Internet bigwigs, including Facebook, which launched Questions, and Google, with its Aardvark acquisition.

With Q&A-powered social search, he explained, users do not need to be familiar with the right search terms or keywords to get good results, and the results are easy to sift through.

Traditional search still important
Roy acknowledged that audio and picture search services such as Shazam and Google Goggles are useful in very specific cases and can help “fill gaps that traditional search isn’t very good for, like identifying a photo or music”.

However, he also pointed out that such search features are more suitable for occasional than everyday use, and will probably continue to be used only in niche areas.

He noted that traditional search–characterized by keywords and algorithms–will be the mainstay of mainstream users.

“I’m not convinced that traditional search will be overtaken by more specialized searches because the vast majority of searches still start with an idea that is better conveyed through words than through other media.”

Greenlight’s Bunn concurred that traditional search will “remain as important as ever” in the new decade but will continue to evolve in terms of the types of results it will produce. These will be more diverse and include news, videos, products and places, apart from the usual Web page results, he added.

Social links get higher billing in Google

Google’s putting a little more attention into social cues when it comes to returning search results.

Over the course of the day Google will start rolling out new social search features that more prominently display content that connections on social networks like Twitter have shared. Google’s been doing that for a while, but in the slums of the search results page: all the way at the bottom.

Now those results will appear interspersed with regular search results when you’re signed into Google and someone on a social network that you have connected to your Google profile shares a link, with a note under the result telling you who shared the link and where. Twitter seems to be the big winner here, but any account linked to one’s Google profile can be featured in results.

Those results won’t be displayed to all searchers: you’ll see individualized results when signed in to Google, based on friends and connections within the Google world (Gmail, Chat, Google Buzz) who publicly share sites through those services or through externally linked services like Twitter or LinkedIn. Google’s also making it possible for users to privately link accounts to their Google Profiles.

It’s all part of Google’s ongoing and mostly fruitless attempts to make social-media connections a greater part of its search results. One of Google’s biggest priorities at the moment is finding a way to stay relevant as an information source as more and more people share information in social networks, and as more and more sites try to game Google’s results.

This is a long-term problem, but it’s a problem nonetheless that is getting a lot of attention internally. One huge issue is the closed nature of Facebook, the king of the social-media world: Google’s all-seeing Web crawlers can’t penetrate Facebook’s services and that has caused tension between the two companies.

Google is expected to roll out more social services over the coming year, having discussed plans to add social layers to existing products as opposed to trying to build a network of its own. Past attempts at that–such as Orkut and Google Buzz–haven’t made an impact.

The coming fight over .gay domain

newsmaker SAN FRANCISCO–Scott Seitz has the dubious distinction of proposing what might become the most controversial new top-level Internet domain: .gay.

Seitz, the chief executive of dotGAY, is the founder of SPI Marketing, which bills itself as a “full service” gay marketing, public relations, and event planning agency. Clients include Absolut Vodka, American Express, Subaru and Travelocity; campaigns have included a RuPaul drag race.

Now, as soon as the application period begins, Seitz is planning to ask the Internet Corporation for Assigned Names and Numbers, or ICANN, to approve .gay. At least 115 proposals are expected, including .car, .health, .nyc, .movie, and .web.

Controversial Internet suffixes have a history of suffering the geopolitical equivalent of being referred to a committee that never reaches a decision. An entrepreneur named Stuart Lawley applied for the rights to run .xxx in 2004, and thanks to opposition from the Bush administration and nations including Brazil, it still has not been approved.

That could happen again. As ZDNet Asia’s sister site CNET reported last week, the Obama administration is quietly seeking the power for it and other governments to veto future top-level domain names; that proposal will be incorporated into a so-called “scorecard” that’s expected to be released in the next few days. Milton Mueller, a professor of information studies at Syracuse University and author of a new book on Internet governance, says conversations with government officials in conservative Arab countries have made it clear they’ll try to veto .gay.

CNET sat down with Seitz last week at the .nxt conference, organized by longtime ICANN-watcher Kieren McCarthy, where scores of hopeful applicants gathered to figure out how to raise money and piece together a compelling application. ICANN is expected to finalize the process during its March or June meetings.

Q: How did you get involved in .gay?
Seitz: I was in sales and marketing at Kodak and then went on to Pepsi. Alexander (Schubert) created the concept for all of this. He was reaching out to a number of people in the gay and lesbian community. It wasn’t just me. He was very public about finding someone as a partner for .gay, the need to find someone in the community as an owner in that effort.

I was really shocked to see that we’re getting ready to see the Internet reborn again in a very different way. And that such a limited number of people were even aware of it. I got involved because I saw what the opportunity was for the gay community. A lot of communities including ours are in flux right now.

Hasn’t this been happening for a while?
Now it’s a much more integrated community in many ways. As the community has become more integrated, it’s become more difficult to reach the community in media, because you have more choices than you had before. .gay will be a venue for enhancing our ability to interact with each other as a community. It also became a global networking opportunity, linking community centers…

But you don’t need a top-level domain to network community centers, do you?
I disagree. Instead of what most people would do, which is go out and sell your top categories, travel.gay, doctor.gay, hiv.gay, bar.gay, we’re keeping them. And they’ll become an index to the community globally.

You already own dotgay.com. Why not just create travel.dotgay.com, and so on, without applying for a new top-level domain?
It’s not the same–you’re subject to whatever .com is subject to. (Instead we’ll be) in the island of .gay, which will have its own policies and be able to police people who are abusive. There’s a big difference between being a site versus a place where multiple sites can exist.

Earlier you said, “We’re going to have to have a filtering process in advance that puts us in place to authorize that Web site.” You have antigay groups out there. How would you filter them or their remarks in practice?
We’re working on that. We want to limit filtering. But we want to be sure we’re filtering appropriately.

It’s not going to slow down your ability to lock down a name you choose. You can go to your registrar, lock down ted.gay. But then you’ll be put through a screening process that will ask what ted.gay is for. Much of this can probably be automated.

Will someone be able to post content that’s legal but offensive? Where do you draw the line?
This is part of the process that we’re developing. That’s exactly the type of person we need to find a way to have localized on the site.

Like if I have to check a box saying I’m over 18, maybe you have to check a box saying that I’m recognizing that this content is potentially unfriendly to the gay community. Yes, the ex-gay community will want to be on the site. The Mormon Church will want to be on the site.

Let’s say I wanted to register ex.gay. Would I be allowed to?
There are two things to that. We’re putting together a policy group. This isn’t just going to be me saying in this interview how it’s going to happen. We can work with some of the best organizations–GLAAD, Lambda Legal. They can help us find a way to filter these people. And help us when they’re going to turn around and sue us. I think we have to assume that’s going to happen.

Second, as a community we really object to filtering in general. But how do we avoid subjecting people to the same type of mental abuse they’ve been subjected to in the general market?

There’s another group, the .GAY Alliance, that also may be bidding for the rights to run .gay. Have you been in touch with them?
They haven’t been active in the site for over a year, so I don’t know what they’re doing right now. I’m open to any conversation with anyone who has a genuine understanding of this and the community. If they’re interested, I’ll make a call.

Doesn’t it make sense to make that call now? Otherwise you might be bidding against each other, with the only beneficiary being ICANN.
It certainly does. If it’s someone who’s working together to benefit the community, I’ll definitely talk to them. I think that’s the spirit of what ICANN tried to do.

If you’re running this as a community service, how do you expect to make enough money to cover your US$185,000 application fee, plus ongoing costs?
It’s really going to be a hybrid of not-for-profit and for-profit–that’s really the vision. There is a business plan in place. There is, I believe, a way to have a happy middle of the road. Our goal is to reach out to the initial community that’s out, including the gay and lesbian business community.

Can you give some examples? You’re thinking of companies that are gay-friendly?
We’re being endorsed by the National Gay and Lesbian Chamber of Commerce, the International Gay and Lesbian Travel Association, Damron Guides. I think we’re going to launch with that. We’re going to be charging a premium because our single goal in this space is to fill it with people–and not domain grabbers who would make it a big parking lot.

How much will you charge per domain name?
This stuff is still being considered. But I don’t think it’s unreasonable to look at US$50 to US$100 per domain name per year.

This is a solid business model that’s pretty simple and straightforward. We know that if we’re being responsible, we should be able to give a lot of that back to the community. If we had to pay a nutty amount of money to own .gay and pay that debt off, it would be much more difficult to make a profit… Our endorsements are not only endorsements but also get us out to parts of the community. Our endorsements are also a huge marketing tool for ICANN.

Why do you think the Obama administration told me that it neither supports nor opposes .gay?
I don’t think they have any idea what they’re dealing with. What’s been pervasive is that unless you’re attending ICANN meetings or you’re really a hard-core fan of technology, you don’t know what this is about.

The reason we don’t hear that much more about larger organizations filing for their own top-level domain is that the legal department sees it as a threat and the marketing department doesn’t know it’s an opportunity.

What do you think of the Obama administration’s recommendation, through the Commerce Department’s NTIA, that governments be able to lodge a veto of proposed new top-level domains “for any reason”?
It’s problematic, and it’s discrimination on a terrible level. It’s not even appropriate for countries (to have the ability to veto) because of freedom of expression. Anything beyond (restricting speech that) incites violence is discrimination.

How about funny or provocative domain names? Will you allow anti.gay? Imonlysometimes.gay? Thatsso.gay?
I think funny domains are a great idea.

What do you think of the argument that some of the most antigay countries, including some that have death penalties for same-sex sexual activity, might not object to .gay because it’ll be easier for them to block?
I don’t think they’re going to welcome it. Whether they should be quite as afraid of it as they are boggles my mind. It makes it easier for them to block. It also makes them stand up and identify that they’re discriminating against the gay community in a very physical way.

Our goal would be to get that conversation going. There are places where that kind of conversation isn’t going to go anywhere. There will be a number of people who choose to block us. But if we can mobilize the rest of the global gay community instead of just New York, London, San Francisco, Berlin–that’s a bigger place to come from. And if we’re coming from there with real numbers and real economic power, then maybe we’ll have a better dialogue.

Yahoo: SEA a potential hotbed for mobile advertising

SINGAPORE–Advertisers in Southeast Asia have not yet fully embraced the mobile platform, but this will change in the near future as the region presents compelling opportunities for mobile advertising to take off, noted a Yahoo executive.

Tommaso Del Re, Yahoo’s head of mobile and business development for Southeast Asia, told ZDNet Asia that the bulk of advertisers in Southeast Asia are still looking at a combination of desktop and mobile Internet, and not just mobile alone, for ad placements.

“Mobile advertising is still in its early days,” he said, citing the dominance of traditional mediums such as TV, print and outdoor. Yahoo, according to him, has spent the last 15 years trying to educate advertisers to go digital. Del Re was speaking at an event the Internet giant held here Wednesday to launch four new online ad formats, three of which are solely for mobile devices.

Despite the current fledgling state of mobile advertising, Del Re predicted that the mobile platform will become the go-to medium for ad placements as more established, blue-chip brands look for a “special engagement with consumers”.

Del Re identified three main drivers that will fuel the mobile advertising trend in the region: the introduction of faster networks such as 3G; falling data costs; and the wider availability of Web-enabled handsets, not unlike the Apple iPhone, that are just as “powerful” but “cheaper”.

Combined, these three factors mean users get “better experiences when on mobile”, he said.

Prajit Prakash, Yahoo Southeast Asia’s custom ad products manager, also said the region has “compelling opportunities” for mobile advertising.

Using the example of Indonesia, Prakash pointed out that the country has a low fixed-line or broadband penetration, meaning it is easier for consumers to access the Internet on their mobile devices such as a smartphone. Increasingly cheaper mobile devices have also fueled the trend of mobile Web usage, he added.

Furthermore, the existence of several value-added mobile services, such as SMS news alerts and mobile banking, has aided the adoption of mobile Internet, said Prakash.

Different platforms, devices a challenge
Del Re acknowledged that it is a challenge to develop multiple ad technologies and formats for the various mobile platforms and smartphones available in the market today, and likened the effort to having to produce different engines for diesel, petrol and electric cars. In spite of this, the effort is “justified”, he said.

According to him, having to address the challenge of multiple platforms offers Yahoo the opportunity to “ask [ourselves] what kind of product we can offer” and take on the perspective of the consumer. For instance, if the user’s device is an Apple iPhone, “our frontpage would be very rich and interactive”, but on a feature phone, the offerings will be more “bare bones”, with less rich media and fewer images, he explained.

Currently, Yahoo’s newly-launched mobile ad formats are only available on the iPhone. Prakash said that formats for Google’s Android platform are in the pipeline, but could not offer a more definite timeline.

The mobile ad formats launched Wednesday are:

  • An “Expandable Banner”, which offers a larger canvas by expanding downward and covers nearly two-thirds of an iPhone screen;
  • An “Adhesion Banner” that remains fixed at the bottom or the side of the screen as the user scrolls through content (see the sketch below);
  • A “Click to Video Banner” that shows a video clip when a user taps on the ad. It will play full-screen on a native video player and once it is finished, users are brought back to the Web site.

“Social Ads”, available on both desktop and mobile form factors, are social widgets that support existing online advertising campaigns. The format allows users to comment and share an ad on social networking sites such as Facebook and Twitter.
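
The “Adhesion Banner” above relies on a simple browser primitive: an element pinned to the viewport with CSS position: fixed stays put while the page scrolls. A minimal TypeScript sketch follows; the sizing and markup are illustrative assumptions, not Yahoo’s ad code.

    // Minimal sketch of an "adhesion" ad unit: a DOM element pinned to the
    // bottom of the viewport with CSS position: fixed, so it stays in place
    // as the user scrolls. Sizing and copy are illustrative assumptions.
    function mountAdhesionBanner(adHtml: string): void {
      const banner = document.createElement("div");
      banner.innerHTML = adHtml;
      Object.assign(banner.style, {
        position: "fixed", // anchored to the viewport, not the page
        bottom: "0",
        left: "0",
        width: "100%",
        height: "50px",
        zIndex: "9999",
      });
      document.body.appendChild(banner);
    }

    mountAdhesionBanner("<span>Sample adhesion ad</span>");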

Asian biz take to social platforms, but not completely

Fortune Global 100 companies from Asia are increasingly open to embracing social networking platforms, but few are able to leverage these to better engage online stakeholders, according to a new study released Wednesday.

Conducted by public relations consultancy Burson-Marsteller, the study determined that 50 percent of Asian companies on the Fortune Global 100 list had Facebook pages, compared to 40 percent in a similar study last year, and that the average number of “Likes” on each of these fan pages grew by 406 percent to 121,257.

In fact, Asia saw a 25 percent increase in Facebook engagement, compared to 18 percent globally.

The survey findings were based on data collated between November 2010 and January this year from the social media activities of Fortune Global 100 companies on platforms including Twitter, Facebook, YouTube, corporate blogs, as well as regional and local microblogging and video-sharing Web sites. Eighteen of these organizations were from Asia-Pacific, while 32 were from the United States, 47 from Europe and three from Latin America.

According to the study, 67 percent of Fortune Global 100 companies in Asia had Twitter accounts, up from just 40 percent in 2010, with 77 percent of them using the “@” mention function to communicate with other Twitter users and 62 percent retweeting content to followers.

Despite their presence on social networking platforms, however, Asian Fortune Global 100 companies were not leveraging these tools to engage with online stakeholders, noted Burson-Marsteller.

Some 60 percent of these organizations allowed fans to post on their Facebook pages, but only 28 percent routinely responded to such posts.

Globally, 74 percent of companies on Facebook allowed fans to post on their walls, with 57 percent actively responding to posts and comments. U.S. companies led in terms of Facebook interaction, with 89 percent allowing posts and 72 percent responding to these comments.

Burson-Marsteller Asia-Pacific President and CEO Bob Pickard said in the report: “While the increase in social media adoption in Asia is in part due to greater investment in this area for local marketing, much of the growth is driven by established Asian multinationals using social media to reach new audiences abroad.

“We expect to see this trend continue as other Asian companies become more comfortable with the interactive nature of social networks and take the opportunity to engage their stakeholders directly on these platforms,” Pickard said.

Across the globe, Twitter engagement saw the highest growth, from 65 percent last year to 78 percent. This was followed by YouTube, from 50 percent to 57 percent, and Facebook, from 54 percent to 61 percent.

In addition, 80 percent of Fortune Global 100 companies were mentioned by Twitter users, compared to 42 percent in the previous year.

Some 67 percent of Asian businesses were on at least one social media platform, up from 50 percent in the previous year.

A report from XMG last month projected that the social media presence of Asia-Pacific companies will grow this year, with over 30 percent of small and midsize businesses expected to use such platforms for marketing purposes.

Another study from comScore last August revealed that Indonesia had the world’s highest Twitter penetration rate, with 20.8 percent of Indonesians visiting Twitter.com in June 2010. Asia-Pacific ranked as the second-fastest growing region, climbing 243 percent to 25.1 million Twitter users.

Google unveils anti-content farm Chrome tool

Google has launched one of its first experiments aimed at fighting back against content farms, asking the public to help identify the worst offenders.

Chrome users can now download an extension from Google called Personal Blocklist that will allow users to block certain domains from appearing in a personalized list of search results. Google will also track the domains that users flag “and explore using it as a potential ranking signal for our search results”, wrote Matt Cutts, principal engineer at Google and a prominent antispam spokesman for the company, in a blog post.
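
Conceptually, the extension’s job is straightforward: hide organic results whose links point at a blocklisted host. The TypeScript sketch below illustrates the idea only; the CSS selector and blocklist contents are assumptions, not the extension’s actual code.

    // Toy sketch of the domain-blocking idea behind Personal Blocklist:
    // hide search results whose links point at a blocklisted host.
    // The selector and blocklist contents are illustrative assumptions.
    const blocklist = new Set(["example-content-farm.com"]);

    function hideBlockedResults(): void {
      // Assume each organic result is an <li class="g"> wrapping its main link.
      document.querySelectorAll<HTMLLIElement>("li.g").forEach((result) => {
        const link = result.querySelector<HTMLAnchorElement>("a[href]");
        if (!link) return;
        const host = new URL(link.href).hostname.replace(/^www\./, "");
        if (blocklist.has(host)) {
          result.style.display = "none"; // drop it from the visible list
        }
      });
    }

    hideBlockedResults();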

For several weeks, Cutts and Google have been acknowledging frustration over the proliferation of content farms in Google’s search results–sites that publish content for little reason other than to appear within search results and draw traffic from Google. Most often that content is poorly written and sometimes nonsensical, as site editors try to work out what people are searching for on Google and commission low-cost posts with enough keywords to show up on the first page of results.

The product may not be pretty but it can be lucrative, as sites like Associated Content and Demand Media look attractive to content companies like Yahoo and investors. Last month Cutts vowed that Google planned to take action in 2011 against such sites, previewing the user-generated blocklist concept as a similar idea to a user-generated spam-labeling extension available for Chrome.

Google took great pains to label Personal Blocklist “an early test” and “experimental”, but it’s now available in English, French, German, Italian, Portuguese, Russian, Spanish and Turkish. Cutts did not say in the post how long it might take Google to amass enough data to change how blocklisted sites appear in regular Google search results.

IE9’s ‘pinning’ brings traffic boost to sites

Microsoft says a small new feature within Internet Explorer 9 is having a big impact on sites that have tweaked their code to make use of it.

“Site pinning”, which is new to this latest major version of Internet Explorer, lets users drag a shortcut to a site from any page onto their Windows 7 taskbar. On the surface this would seem like any other shortcut, except that Microsoft has provided ways for sites to boost the interactivity, like putting site-specific notifications, navigation, and information in contextual menus that sit behind the icon.

Microsoft now says that sites that have gone this extra step are seeing anywhere from a 15 percent to 50 percent increase in site visits, a boost that can be traced back to a pinned site’s increased visibility compared with bookmarks, which are usually kept hidden within a menu inside the browser.

“It shouldn’t surprise that much,” Brian Hall, general manager of Windows Live business group, told ZDNet Asia’s sister site CNET in an interview last week. “If you think about it there’s a reason people have competed aggressively for default home paging for years and years and years. That default home page was the thing that you saw every time you started your browser,” Hall said.

“What we enable is the ability to get out of having only one home page. And not go wonky to the level that you have to have multiple paths, which an average customer isn’t going to do for their home page set,” he continued.

So far more than 900 sites have taken advantage of the feature, meaning that they’ve added some code to their site to offer up the special features to IE9 users. That includes high-resolution icons and support for Jump Lists, which break out site-specific actions into a menu that can be accessed without hunting around for those same options on the site itself. The feature has long been available to native applications built for Windows 7, with Microsoft positioning IE9 as the first pathway for Web developers to bring the functionality to their sites and Web applications.
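
The “some code” in question is largely metadata. As an illustration, the TypeScript sketch below injects the kind of msapplication-* meta tags Microsoft documented for pinned sites; in practice sites ship these tags directly in the page head, and the names, URLs and task entries here are placeholders.

    // Sketch of the extra markup a site ships to light up IE9 pinning,
    // injected via the DOM for illustration; real sites put these <meta>
    // tags straight into the page <head>. URLs and tasks are placeholders.
    const pinningMeta: Record<string, string> = {
      "application-name": "Example News",
      "msapplication-tooltip": "Example News - latest headlines",
      "msapplication-starturl": "http://www.example.com/?pinned=true",
      // Each task becomes a Jump List entry behind the pinned icon.
      "msapplication-task":
        "name=Top Stories;action-uri=http://www.example.com/top;icon-uri=http://www.example.com/favicon.ico",
    };

    for (const [name, content] of Object.entries(pinningMeta)) {
      const meta = document.createElement("meta");
      meta.name = name;
      meta.content = content;
      document.head.appendChild(meta);
    }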

“We have more and more sites that just continue to keep pushing it,” Hall said. “For instance when you have Pandora pinned now you’ll notice that when you’re paused and the window is not in the foreground, you’ll see a notification that lets you know that you’re in pause.”

Others have also moved to take advantage of the feature by promoting it when users first visit using IE9. “Huffington Post is interesting. If you go to Huffington Post from IE9, it will actually prompt you to do the pinning because they know that if it’s pinned you’re going to go there more often,” Hall said. Similar initiatives have been done by mobile Web application developers with the home screen shortcut feature that’s built into Apple’s Safari browser on its iOS devices.

Microsoft also sees site pinning as a way to change the way portal-style home pages typically drove traffic to internal properties. “Let’s take a site like Yahoo, which today has obviously good home page share in the United States,” Hall said. “We could encourage people to pin Yahoo, pin Yahoo Mail, pin Yahoo Finance, and all the sudden [Yahoo] doesn’t need to try and program everything through that single piece of real estate that is the home page.”

Hall said the system encourages users to group together similar sites, or clusters of links. “If you go to 20 different sites, if you just start pinning them you get logical groupings,” he said. “So let’s say I’m doing all my research on MSN, I can have 10 links that are logically grouped here, and they’re not getting in the way of my de novo browsing session.”

But does that principle scale as users begin to pin more and more sites? Based on user behavior during the beta, that hasn’t proven to be an issue. “I think the majority of people aren’t going to have more than 10 pins,” Hall said. For those that do, Hall pointed toward simply expanding the size of the Windows task bar to double or even triple height (or width) to accommodate more pins.

“I think what you’ll find is, the more sites that do pinning, the more people want to pin. You might see more people going into double height, but that’s a problem we look forward to having,” Hall said.

Microsoft put out the first, and likely only, release candidate for IE9 last week, though the company has not said when it plans to roll out the final version. The software continues to be offered only for the current and previous iterations of Microsoft’s Windows operating system: Windows 7 and Windows Vista.

When Groupon goofs, everyone notices

Few companies have changed the e-commerce world in the recent past as much as Groupon, a local-deals broker that has gotten the nation hooked on half-price massages, discounted restaurant bills, and packages offering rock-climbing and yoga combos (though, ideally, not at the same time). It’s earned rave reviews for customer service, thanks in part to its hiring of underemployed comedians as copywriters and service reps.

Yet Groupon has taken a beating in the past few weeks–not in terms of traffic, and not from the rise of any of its several dozen smaller competitors–but just because of a few bonehead moves. Its much-talked-about television ad campaign, kicking off with a Super Bowl spot, used C-list celebrities to mock charitable donation ads, and was seen by many as so tasteless that the company pulled the plug on it.

Then, Groupon users nationwide were furious when a Valentine’s Day-themed deal with an online flower retailer redirected users to a site with jacked-up prices, rendering the discount useless and raising concerns that Groupon isn’t properly vetting its partner retailers.

The problem for Groupon isn’t that it’s making these mistakes. It’s that it’s making them as a company that, while barely two years old, sends deals to more than 60 million e-mail in-boxes, has sold more than 39 million “deals” according to new internal data, and plans to file for an initial public offering later this year. To put things into perspective: when Facebook was this age, its landmark developer platform launch was still a year away, and it wasn’t even possible to register for Facebook unless you had an e-mail address from an approved school or business.

The rise of Groupon has been unlike anything else we’ve seen in the recent boom in tech companies for a lot of reasons, not the least of which is the fact that it rocketed smack into the mainstream without much time in the domain of insidery early adopters.

They say the Bay Area’s technology culture is a bubble–not necessarily in terms of overvaluation, but in terms of isolation. It’s more like a cocoon. Companies that grow there are, typically, entitled to a period of quasi-gestation in which they can screw up, and people will be vocal, but those who are actually noticing and listening are a relatively restricted set. Twitter’s servers used to go haywire on a near-daily basis, but the service was so restricted to tech enthusiasts that pundit Robert Scoble was its most popular user. Facebook, though founded in a college dorm on the East Coast, kept its numbers low with the e-mail address requirement and was well ensconced in Valley culture by the time it opened up the gates.

Groupon, firing out e-mail messages to the Deep South and Mountain West and Mid-Atlantic from its sprawling headquarters in Chicago, was not afforded that privilege. It certainly has a quirky startup attitude–but that’s exactly what clashed with the “real world” when the offbeat humor of its Super Bowl ad offended the mass market. And it seems to be grappling with the tech-industry vision of being a platform rather than a media company, connecting advertisers with customers while remaining the universally appealing brand humming away in the background. The heavy publicity surrounding the bogus flower deal last week might hint to some that Groupon is focusing too much on being everywhere and losing its focus on the quality of its content.

In contrast, when Facebook released its disastrous Beacon advertising product–in late 2007, when it had roughly the same number of users that Groupon does now–its users largely didn’t notice. The product was launched at a small press conference in New York, subsequently ripped apart by the press, and was watered down within weeks. The average Facebook user likely never even saw a Beacon ad; true mainstream interest in the company’s inner workings didn’t take off until well over a year later. Facebook was still very much in the cocoon.

Groupon’s got a bit of a catch-22 on its hands. It’s big and obviously proud of its choice spot in the mainstream and its outside-the-Valley attitude–could you ever see Facebook, Twitter, or LinkedIn buying a Super Bowl ad? And Groupon is big enough for the occasional slip-up to have truly visible reverberations. Still, it’s not so big that a PR crisis could be swallowed up by the sheer size of the rest of the company. Google’s launch of Google Buzz was disastrous, but the fate of the Mountain View, Calif., conglomerate was hardly resting upon the lightweight, experimental product. Caught in between these two phases of development, Groupon is like a kid who grew too tall too quickly and now finds that everyone notices when he trips or hits his head on things.

The good news for the company is that its loyal users seem to have gotten over the Super Bowl revulsion pretty easily, and that CEO Andrew Mason’s frankness about pulling the ad campaign seems to have helped. YouGov, a research firm that measures “brand perception”, plotted the positive and negative buzz about Groupon on its scale of 100 (very positive) to -100 (very negative), and found that in the days following the Super Bowl ad, Groupon’s score fell from 14.4 to 5.3 but then shot back up to 26.6 after Mason wrote an apologetic blog post.

YouGov hasn’t yet measured the change in Groupon brand perception in the wake of the botched flower deal.

Both follies are high-profile stumbles that customers will likely forget about the next time they see a killer deal that they simply can’t pass up. One of Groupon’s many competitors, LivingSocial, might have something to say about that: while it has never put forth an explicit “we’re better than Groupon” message, it’s proven remarkably savvy at weaseling its way into situations where it’s getting pitted against the bigger site. On Super Bowl Sunday, LivingSocial purchased a pre-game ad–something that probably wouldn’t have happened if Groupon hadn’t been advertising during the game.

Previously, LivingSocial had raised a bucketload of funding from Amazon.com and then, perhaps thanks to the new business relationship, offered a killer Amazon.com deal and attracted plenty of new members in the process. In late January, traffic firm Experian Hitwise reported that after the Amazon deal, LivingSocial went from pulling in one-tenth the traffic of Groupon to nearly half.

But in aiming straight for Groupon’s market, LivingSocial is in the same spotlight. The moment it messes up on something–and knowing young technology companies, it will–that’s going to be Groupon’s gain.

Either way, it’s a rare look at the rise of a Web company that’s grown outside the industry’s famed bubble, or cocoon, or whatever you want to call it.

MPEG LA patent move blemishes Google’s Web video plan

A serious complication has just emerged for Google’s plan for high-quality, patent-free, open-source video on the Web–but Google also revealed plans today to try to counteract it.

MPEG LA, an organization that licenses video-related patents covering a variety of standards, has formally asked patent owners to inform it of patents they believe Google’s VP8 technology uses.

In “offer[ing] to facilitate development of a joint license to provide coverage under essential patents,” MPEG LA is taking a major step toward actually offering such a license.

That might reassure some players who are interested in VP8 and its related WebM video-streaming technology, easing licensing deals that otherwise could involve many companies in the process. But it also would raise doubts about whether makers of browsers, mobile phones, processors, and cameras would be free to use the technology without signing such a license.

And that could drag down Google’s ambition to make “a high-quality, open video format for the Web that is freely available to everyone.” VP8 is a codec–technology to encode and decode video–and, paired with the Vorbis audio codec, it forms the WebM Project.
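
For context, consuming WebM on a page is plain HTML5: a video element fed a .webm source. A minimal TypeScript sketch, with a placeholder file URL:

    // Minimal sketch of playing WebM (VP8 video plus Vorbis audio) via the
    // HTML5 <video> element. The file URL is a placeholder.
    const video = document.createElement("video");
    video.controls = true;

    const source = document.createElement("source");
    source.src = "/media/clip.webm"; // placeholder
    source.type = 'video/webm; codecs="vp8, vorbis"';
    video.appendChild(source);

    // Browsers without a WebM decoder return "" here, which is why the
    // licensing question matters to browser and device makers.
    console.log(video.canPlayType('video/webm; codecs="vp8, vorbis"'));

    document.body.appendChild(video);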

MPEG LA had this to say last week:

In order to participate in the creation of, and determine licensing terms for, a joint VP8 patent license, any party that believes it has patents that are essential to the VP8 video codec specification is invited to submit them for a determination of their essentiality by MPEG LA’s patent evaluators. At least one essential patent is necessary to participate in the process, and initial submissions should be made by March 18, 2011.

Google pooh-poohed the move, though, and said it is moving to marshal allies that share its viewpoint:

MPEG LA has alluded to a VP8 pool since WebM launched–this is nothing new. The Web succeeds with open, community-developed innovation, and the WebM Project brings the same principles to Web video. The vast majority of the industry supports free and open development, and we’re in the process of forming a broad coalition of hardware and software companies who commit to not assert any IP claims against WebM. We are firmly committed to the project and establishing an open codec for HTML5 video.

Just a few weeks ago, MPEG LA made it clear which way it thinks the VP8 patent wind blows.

“We do not believe VP8 is patent free,” MPEG LA told ZDNet Asia’s sister site, CNET. “There continues to be interest in the facilitation of a pool license to address the apparent marketplace desire for convenience in accessing essential VP8 patent rights owned by many different patent holders under a single license as an alternative to negotiating individual licenses.”

Mozilla, a strong ally in the effort to establish a royalty-free video codec, declined to comment for this story.

VP8’s biggest competitor, H.264, is used by many companies that pay royalties to MPEG LA. MPEG LA offers licenses on behalf of patent holders for such technology, returning royalty payments to those companies.

Microsoft found its ambitions thwarted in a similar way years ago. It tried to establish a Windows Media Player-based video codec called VC-1 as a standard, but in 2007, MPEG LA stepped in with a patent pool of its own.

Music service Pandora files for IPO

Internet radio company Pandora filed a registration statement last week to go public, according to a release. The number of shares to be issued and pricing information has not yet been determined, but the underwriters of the IPO are investment banks Morgan Stanley & Co. and J.P. Morgan Securities.

The wildly popular Pandora uses a “music genome” algorithm to create custom radio stations based on a single song or artist, and offers paid subscriptions alongside a free, ad-supported version and a suite of popular mobile apps. It has had a spectacular rise as well as a brush with death, when it appeared that licensing fees might doom it entirely. In fall 2008, delays on Capitol Hill meant that crucial legislation about the royalty fees Internet radio stations pay might take so long that a company like Pandora could run out of money first.
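
As a toy illustration only–not Pandora’s actual algorithm–the “genome” idea can be read as scoring each song on a fixed set of musical attributes and seeding a station with the seed song’s nearest neighbors. A TypeScript sketch:

    // Toy illustration of the "music genome" idea: songs scored on a fixed
    // set of attributes, with a station seeded by attribute-space similarity.
    // Attributes, scores and the cosine metric are illustrative assumptions.
    type Genome = number[]; // one score per attribute (tempo, distortion, ...)

    function cosine(a: Genome, b: Genome): number {
      const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
      const norm = (v: Genome) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
      return dot / (norm(a) * norm(b));
    }

    const library: Record<string, Genome> = {
      closeMatch: [0.8, 0.3, 0.6],
      poorMatch: [0.1, 0.9, 0.2],
    };
    const seed: Genome = [0.9, 0.2, 0.7];

    // Rank the library by similarity to the seed song, most similar first.
    const station = Object.entries(library)
      .sort(([, a], [, b]) => cosine(seed, b) - cosine(seed, a))
      .map(([name]) => name);

    console.log(station); // ["closeMatch", "poorMatch"]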

But the decision was favorable, and chief technology officer Tom Conrad said last year that the company recorded its first profitable quarter at the end of 2009.

It’s been one of the few success stories in the digital music world, which over the past decade has been littered with financial failures and piracy-related lawsuits. Now, it’ll join a mini-boom in technology IPOs, entering the ranks of Demand Media, which went public last month, and LinkedIn, which filed for an IPO shortly thereafter. A handful of others, like Facebook–currently valued at US$50 billion–and Groupon, are said to be waiting in the wings.

Egypt, Twitter, and the rise of the watchdog crowd

There were two critical masses that led to the resignation of Egyptian president Hosni Mubarak last week: One was the horde of protesters who flooded Tahrir Square in the country’s capital of Cairo for two weeks. The second was the fusion of millions of observers, pundits, and supporters around the world into a sort of leaderless digital watchdog, an unwavering force that ensured the international eye would not stray from Egypt.

It’s the latter where we can credit social media.

We shouldn’t go so far as to call this a social media revolution, but it is arguably the first time in history that we’ve seen Facebook and Twitter–now a crucial part of the way we communicate–speedily and successfully conveying the ideas and beliefs that do lead to a revolution. More importantly, social media made it all happen in a public forum with the rest of the world watching. That put Egypt at the center of a massive international spotlight, emotionally empowering those on the ground and strengthening the pressure on Mubarak’s regime with a force that came not from world leaders but from the sheer size of the crowd.

“Social media didn’t cause this revolution. It amplified it; it accelerated it,” said Ahmed Shihab-Eldin, a producer for news network Al-Jazeera English, in a panel about Egypt and social media that was held last week at Google’s New York office as part of the Social Media Week conference series. “It’s important to notice that in a very short period of time there have been two revolutions, so to speak.”

Egypt was the second of those two. The first, an uprising in nearby Tunisia that saw its government ousted, was crucial to Egypt’s own for many reasons, not the least of which is that it permitted the world to watch what was unfolding in Egypt from its beginnings. That early attention was what let the “global watchdog” grow as powerful as it did.

Here’s why: all too often, political turmoil is only highlighted in the mainstream when it’s well under way rather than in its infancy. To use an analogy that’s slightly inappropriate in its levity, the level of popular interest outside the region is often akin to that of an audience member who walks into a movie theater halfway through the film: no real emotional connection is made to the subject matter, interest peters out quickly, and the political situation disappears from the mainstream media.

But in Egypt, which was in the spotlight from the start because news outlets had already begun covering the situation in Tunisia, the audience outside Egypt was treated to the full story from the revolution’s earliest hours. The Twittering masses were captivated and would not be satisfied until there was some kind of conclusion to the story. This is a story with a beginning, a plot, a cast of characters (witness the rise in prominence of then-detained Google executive Wael Ghonim over the past two weeks), and the global desire to produce a satisfying end.

That amplified audience would not have been able to grow so powerful without social media’s unprecedented reach and ability to fuel a more or less infinite amount (server power willing) of real-time news.

This is particularly important to note because it was among those outside Egypt that social media may have had the most profound impact. In the same panel discussion last week, filmmaker and writer Parvez Sharma emphasized that while millions of people were tuning into Twitter for Egypt updates, few of them were actually on location even before the Mubarak regime began cracking down on Internet access.

“There’s 80 million people in Egypt, and almost 40 percent are below the poverty line,” Sharma said. “Cell phone penetration is incredibly high, but the majority of the cell phones are not smartphones. A lot of the information that was getting out was from a very small critical mass of people that were able to tweet out of Egypt. Friends of mine in Cairo estimate that it’s less than 200 people who were tweeting from Cairo.”

Sharma continued: “I think it’s been incredibly condescending to diminish, if you will, what was an incredibly popular revolution the likes of which the Arab world has not seen, perhaps the whole world has not seen, and just to say that it was a Facebook event or a Twitter event.”

Social media did not make the revolution in Egypt happen. But, with every step chronicled in real time and broadcast to anyone with an Internet connection, it hastened its pace and transferred the voice of international scrutiny from sovereign leaders to a community of millions. When it comes to pressuring an authoritarian leader to step down, the heat has never been turned up so quickly.

As entrepreneur Habib Haddad tweeted about the whole thing, “Social media has lowered the cost of revolution.”

MPAA sues Hotfile, battle for cloud begins

For the first time, a group of Hollywood film studios has filed a copyright lawsuit against a cyberlocker.

File-hosting service Hotfile has made a business out of offering a stash box for people to store their pirated movies, the Motion Picture Association of America claims in its suit against Hotfile.

“In less than two years, Hotfile has become one of the 100 most trafficked sites in the world,” the MPAA said in a press release issued today. “That is a direct result of the massive digital theft that Hotfile promotes.”

According to the MPAA, Hotfile is operated by Florida resident Anton Titov, who was not immediately available for comment.

A growing number of digital-locker services have lately come under fire from copyright owners. Liberty Media Holdings, an adult-film studio, last month also filed a copyright suit against Hotfile. On the music side, EMI, the smallest of the four major record labels, is suing MP3tunes.com, a digital locker specializing in the storage of songs.

The cyberlockers are an alternative to BitTorrent file-sharing services and are growing in popularity. With these services, there’s no need to download any software. A user logs on to a locker service and watches whatever films or TV shows are stored there.

The MPAA was careful to make the distinction that not all cyberlockers are unlawful. That’s important because the Digital Millennium Copyright Act’s safe harbor protects Internet service providers as long as they obey some rules. The trade group for the top film studios said Hotfile doesn’t come close to qualifying for safe harbor protection.

The service “openly discourages use of its system for personal storage”, the MPAA wrote. “Hotfile’s business model encourages…users to upload files containing illegal copies of motion pictures and TV shows to its servers and to third-party sites.”

According to the MPAA’s suit, Hotfile is no free-information advocate. This is straight-up piracy for profit, the trade group said. Hotfile collects revenues by charging a monthly fee.

School filters coddle kids, are ineffective

Internet filters in schools often compromise a teacher’s ability to teach, yet at the same time are easy for tech-savvy students to get around, a parliamentary committee on cyberbullying has heard.

The Federal Parliament’s cyber-safety committee convened late last week to investigate community concerns about protecting children from bullying online and the measures that could be used to prevent it, such as Internet filtering.

Philip Lewis, principal at Gleeson College and chair of the Association of Principals of Catholic Secondary Schools, told the committee that the rise of mobile phone use by school-aged children made Internet filtering ineffective in schools.

“The…problem that schools have is that while we put lots of filters on our networks, the more recent developments of being able to access data and the Internet through phones makes it even harder for schools to police that,” he said. “Even though it does not happen on our network it is happening during the day.”

Mary Carmody, senior education adviser with the Catholic Education Office, said there was also a fine line between protecting children and removing a valuable teaching resource from the teachers.

“There is that balance between having enough open access so that we can engage in really contemporary learning and enough restrictions so that we can provide some security for young people,” she said.

This was a view shared by Mary Campbell, associate professor with the Australian University Cyberbullying Research Alliance. Campbell likened the Internet to a pool: you can build a fence around it, but that doesn’t mean you don’t teach your children how to swim.

“It does not mean that you do not actually educate them about water safety in other areas,” she said.

She went on to add that filtering of any kind can’t prevent cyberbullying. “I cannot see how you can stop me going on to Facebook and bullying somebody else by any type of filter, because you cannot say that you are never allowed to say the word ‘loser’, or ‘You are not invited to my birthday party’ or all of the horrible things that people can say. They cannot be filtered out, because they are normal children’s language.”

Campbell said that children will also eventually work out ways around the pool’s fence.

“For pornographic sites, when children are older, they will use a proxy server to access pornographic sites if they want to because anybody with any technological expertise can, anytime you put filters on, get around them.”

Mandatory ISP-level filtering
The government’s planned mandatory Internet service provider (ISP) level filter was met with criticism by Associate Professor Karen Vered from the Flinders University Department of Screen and Media, who told the committee that hiding the Internet from children would not be an effective countermeasure against issues like cyberbullying.

“Clean feed and things like that are not going to help young people to develop their ability to discriminate, to evaluate and to act under circumstances that require them to exercise their own judgement,” she said. “If we do not give them a chance to exercise judgement, to practice that, how will they develop that skill?

“The emphasis going forward needs to be on education and experience for children and young people, while respecting their interests, their autonomy and their agency.”

However, the filter was welcomed by several groups who spoke to the committee. Professor Elizabeth Handsley, president of the Australian Council on Children and the Media, said the filter would be good for parents unwilling to install their own filter.

“We realise there are political difficulties with Internet filtering at ISP level, but from our perspective it is a useful tool. It will minimise the risk to children where parents do not have the wherewithal or the desire to install their own filter,” she said.

Roslyn Phillips, national research officer for Christian lobby group FamilyVoice Australia, said the government’s planned filter would protect her children in areas outside her control.

“My problem is that as a parent I can control what happens in my home but I cannot control what happens outside,” she told the committee. “Even though my own children may not have a mobile phone with Internet access, their friends are likely to have one. Just having parental filters in the home is not a solution because for some of the worst material you need Senator [Stephen] Conroy’s mandatory filter.”

This article was first published on ZDNet Australia.

Facebook mulls limiting access for ‘bullies’

Facebook’s map of the relationships between individual users is being brought to bear to filter out abusive and fake users from the social network.

The 500 million-strong social network is trialling a number of features to discourage people who bully others, post spam or contravene the site’s policies in other ways, a manager for Facebook’s public policy team told journalists on Tuesday.

For example, Facebook is experimenting with actively banning members from parts of the site if they have been abusive. This includes measures such as “shutting [them] off from using [features] such as creating a group or maybe joining a group”, Simon Axten said.

Read more of “Facebook mulls limiting access for ‘bullies’” at ZDNet UK.

Users should protect own privacy

Individuals have to take their own steps to manage their personal data but how much power and provision they have to do so remains in question, according to industry players and advocacy groups.

Graham Titterington, principal analyst at Ovum, described user privacy as a fundamental component of human psychology as well as a symbol of trust and intimacy. For these reasons, it is still “a major issue” in today’s Web 2.0 society, he told ZDNet Asia.

Furthermore, Titterington said, most forms of cyber authentication use personal information. “If privacy is dead, so is online commerce in the long run,” he said, adding that many Internet companies have a business model that is totally dependent on users’ personal information.

The Australia-based analyst, though, emphasized the users’ role in protecting their own privacy. He noted that many consumers have a “false idea that everything online is free” and do not realize that the content still needs to be paid for in some form or another.

“Ultimately, it is up to users to determine whether they want a free Internet or control over their information,” said Titterington. “[Consumers will] just have to look after themselves… [Businesses] cannot be relied upon to act in the best interests of their customers.”

In an e-mail interview, Singaporean undergraduate Rachel Goh related how online users previously had more control over what personal data to provide in exchange for services they want.

“Our privacy, today, is no longer one where you have some clue or control of what is being done with the personal information you consciously give to Web sites so that you can carry out online activities, like checking your e-mail or paying your bills,” Goh said.

“Now, sometimes you don’t even realize information is being taken from you,” she noted, referring to the previous privacy breaches involving Apple iPhone mobile apps and Google’s Street View.

After admitting its Street View cars had harvested personal data from unsecured Wi-Fi networks, Google could face charges in South Korea, where local authorities had indicated plans to bring the case to court. Governments in the United States, Australia and Germany, where the Internet giant faced similar scrapes, had decided not to pursue the matter.

In an e-mail statement, a Google spokesperson told ZDNet Asia: “While we have repeatedly acknowledged that [the data collection] was a mistake, we believe Google did nothing illegal in Korea, and we are working with the relevant authorities including the Korean Communications Commission and the police to respond to their questions and concerns.”

She added that besides ceasing all Wi-Fi data collection from its Street View cars, Google has made several changes to strengthen privacy controls, including simplifying its privacy policies and appointing Alma Whitten as privacy director to ensure its products and internal practices contain effective safeguards.

Provision of privacy controls limited
While Goh agreed that online users should take steps to protect their own privacy and not rely on a third-party to do so, she noted that there are sometimes limited tools and controls available to allow users to manage their privacy.

This is an area some Internet companies such as Facebook have said they constantly seek to address.

Kumiko Hidaka, global communications manager for the social networking site, said: “User control is something we seriously think about when developing Facebook’s features in general”.

She told ZDNet Asia in an e-mail that Facebook users can “control exactly what they want to share and with whom they want to share it”. Hidaka said the site offers a “comprehensive set of privacy tools” that allow a granular level of control, so that individual pieces of information can be shared with, or restricted from, specific persons.

For instance, referring to Facebook’s photo-tagging feature, which uses facial-recognition technology, she said there are controls that allow users to manage who can view photos in which they have been tagged.

The social network, however, has faced much criticism over its privacy practices.

Quizzed about criticism of the company’s opt-out privacy controls, Facebook’s Hidaka replied that the site offers a set of recommended settings as default configurations, and people can choose to share their information with friends, friends of friends, everyone, or a customized list of friends.

However, privacy advocate Beth Givens said “opt-out is a very imperfect vehicle for protecting personal privacy” because individuals have to take the initiative to opt out of the use of their personal information.

The founder of Privacy Rights Clearinghouse, Givens noted in an e-mail interview: “Privacy is a human right… Like all human rights, the individual should not be burdened with protecting one’s privacy.”

She added that companies should instead allow consumers to opt-in to data collection and behavior tracking.

Law not keeping up
Givens, who is based in the U.S., said laws in the country are inadequate to prevent identity theft, curb stalking or provide individuals true privacy protection.

The courts want to see “harm” to find cause for breach, but it is difficult for a user to show a direct correlation between the company that shared private data and the harm the consumer experienced, she explained.

Bryan Tan, a Singapore-based tech lawyer who runs his own practice, Keystone Law, noted: “There is no presumption of privacy in privacy legislation.”

In an e-mail interview, Tan said commercial forces have every incentive to lobby against privacy legislation. “Until lawmakers feel that someone is overstepping his boundaries, lawmakers will only enact legislation to counter what needs to be countered,” he said.

Nonetheless, Ovum’s Titterington described proponents of user privacy as fighting “a hard, [but] not losing, battle”.

“The death of privacy has been predicted for many years, but it hasn’t gone without a continuing fight,” said the analyst, citing Sun Microsystems’ then-CEO Scott McNealy, who famously declared in 1999: “You have zero privacy. Get over it.” Just last year, Facebook CEO Mark Zuckerberg proclaimed a similar stance, noting that the “age of privacy is over”.

US seeks veto powers over new domain names

The Obama administration is quietly seeking the power for it and other governments to veto future top-level domain names, a move that raises questions about free expression, national sovereignty, and the role of states in shaping the future of the Internet.

At stake is who will have authority over the next wave of suffixes to supplement the venerable .com, .org, and .net. At least 115 proposals are expected this year, including .car, .health, .nyc, .movie, and .web, and the application process could be finalized at a meeting in San Francisco next month.

Some are likely to prove contentious among more conservative nations. Two different groups–the dotGAY Initiative and the .GAY Alliance–already have announced they will apply for the right to operate the .gay domain; additional controversial proposals may surface in the next few months. And nobody has forgotten the furor over .xxx, which has been in limbo for seven years after receiving an emphatic thumbs-down from the Bush administration.

When asked whether it supports or opposes the creation of .gay and .xxx, an official at the U.S. Commerce Department replied that “it is premature for us to comment on those domain names.” The Internet Corporation for Assigned Names and Numbers (ICANN), a nonprofit based in Marina del Rey, Calif., that has a contract with the U.S. government to manage Internet addresses, is overseeing the process of adding new domain suffixes.

A statement sent to ZDNet Asia’s sister site CNET over the weekend from the Commerce Department’s National Telecommunications and Information Administration, or NTIA, said its proposed veto procedure “has merit as it diminishes the potential for blocking of top level domain strings considered objectionable by governments. This type of blocking harms the architecture of the DNS and undermines the goal of universal resolvability (i.e., a single global Internet that facilitates the free flow of goods and services and freedom of expression).”

Another way of phrasing this argument, perhaps, is: If less liberal governments adopt technical measures to prevent their citizens from connecting to .gay and .xxx Web sites, and dozens of nations surely will, that will lead to a more fragmented Internet.

In addition, giving governments more influence inside ICANN may reduce the odds of an international revolt that would vest more Internet authority with the not-exactly-business-friendly United Nations. Last year, China and its allies objected to the fact that “unilateral control of critical Internet resources” had been given to ICANN and suggested that the U.N. would be a better fit.

Submitting an application to create and operate a new domain suffix is expected to cost US$185,000, ICANN says.

The Obama administration is proposing (PDF) that domain approval procedures be changed to include a mandatory “review” by an ICANN advisory panel composed of representatives of roughly 100 nations. The process is open-ended: any government “may raise an objection to a proposed (suffix) for any reason.” Unless at least one other nation disagrees, the proposed new domain name “shall” be rejected.

This would create an explicit governmental veto over new top-level domains. Under the procedures previously used in the creation of .biz, .name, and .info, among others, governments could offer advice, but the members of the ICANN board had the final decision.

“It’s the U.S. government that’s proposing this procedure, and they’ve shown absolutely no interest in standing up for free expression rights through this entire process,” says Milton Mueller, a professor of information studies at Syracuse University and author of a recently-published book on Internet governance. Mueller, who said he expects some Middle Eastern countries to object to .gay, says the Obama administration is “completely disregarding” earlier compromises.

According to the latest version of ICANN’s proposed procedure, anyone may file objections to a proposed domain suffix on grounds that it may violate “norms of morality and public order,” although there’s no guarantee that a suffix would be rejected as a result. Two ICANN spokesmen did not respond to multiple requests for comment.

“NTIA will continue to provide advice on how ICANN can promote competition in the domain name marketplace while ensuring Internet security and stability,” NTIA said in a statement. “NTIA continues to support a multi-stakeholder approach to the coordination of the domain name system to ensure the long-term viability of the Internet as a force for innovation and economic growth.”

The U.S. proposal will be incorporated into what’s being called a “scorecard” that governments are drafting to summarize their concerns with the current process of approving new domain suffixes. The scorecard is expected to be published in two weeks.

Then, at the end of this month, ICANN will hold a two-day meeting in Brussels with representatives of national governments to try to reach a compromise on how to share authority over new domain suffixes. (The language of the official announcement says the purpose is to “arrive at an agreed upon resolution of those differences.”) ICANN’s next public meeting begins March 13 in San Francisco.

A seven-page statement (PDF) in December 2010 from the national governments participating in the ICANN process says they are “very concerned” that “public policy issues raised remain unresolved”. In addition to concern over the review of “sensitive” top-level domains, the statement says, there are also issues about “use and protection of geographical names.” (For instance, should a U.S.-based entrepreneur be able to register .london or .paris, or should those be under governmental control?)

That statement followed years of escalating tensions between ICANN and representatives of national governments, including a 2007 statement stressing the importance of “national sovereignty.” A letter (PDF) sent to ICANN in August 2010 suggested that “the absence of any controversial (suffixes) in the current universe of top-level domains to date contributes directly to the security and stability of the domain name and addressing system.” And the German government recently told (PDF) ICANN CEO Rod Beckstrom that there are “outstanding issues”–involving protecting trademark holders–that must be resolved before introducing “new top-level domains.”

Steve DelBianco, the executive director of the NetChoice coalition, says that the Obama administration’s proposed veto “is not surprising.” Governmental representatives “were not happy about .xxx getting through,” he says. “They want a better mechanism in the future.” NetChoice’s members include AOL, eBay, Oracle, VeriSign, and Yahoo.

“They’re looking at the rear view mirror at .xxx and looking through the windshield at several hundred new” top-level domain names, DelBianco says. “They want a mechanism that if (they) have concerns, they could stop an objectionable domain.”

Microsoft responds to Google’s copycat claims, again

Following last week’s fracas over whether Microsoft was culling search results from rival Google, Yusuf Mehdi, Microsoft’s senior vice president of its Online Services Division, has weighed in, reiterating that Google’s claims are false.

“We do not copy results from any of our competitors. Period. Full stop,” Mehdi said in a post on Bing’s community blog titled “Setting the record straight”.

“We have some of the best minds in the world at work on search quality and relevance, and for a competitor to accuse any one of these people of such activity is just insulting,” Mehdi said.

Mehdi went on to mirror some of the statements made by Harry Shum, Microsoft’s head of core search development, during the company’s Farsight event. Shum had discussed the allegations on stage with Google’s head of Web spam, Matt Cutts; Mehdi outlined how Bing made use of anonymous click stream data, along with “more than a thousand inputs”, to create Bing’s ranking algorithm.

Mehdi said that Google’s plan to check whether Bing was looking at that click stream data was “rigged to manipulate Bing search results”, and called Google’s honeypot attack “click fraud”. He then compared Google’s efforts to the methods used by spammers to create fraudulent search result pages.

“What does all this cloak and dagger click fraud prove? Nothing anyone in the industry doesn’t already know,” Mehdi said. “As we have said before and again in this post, we use click stream optionally provided by consumers in an anonymous fashion as one of 1,000 signals to try and determine whether a site might make sense to be in our index.”

Mehdi closed the post by saying that the company would continue to focus on innovating the product, though he added a jab about the timing of Google’s honeypot discovery, saying it was directly related to some of Microsoft’s recent improvements to Bing, which were “so big and noticeable that we are told Google took notice and began to worry”.

Lawmakers ruffling Facebook feathers again

Two members of the U.S. House of Representatives are putting pressure on Facebook to say more about its plans to share more user information with third parties.

Last week, U.S. Reps. Ed Markey (D-Mass.) and Joe Barton (R-Texas) published a joint letter to Facebook CEO Mark Zuckerberg in which they request “information about Facebook’s recently announced, and subsequently postponed, plan to make its users’ addresses and mobile phone numbers available to third-party Web sites and application developers”.

Facebook announced last month on its developer blog that it would delay, but eventually continue with, plans to let users share their addresses and cell phone numbers, after the initial announcement of the feature drew criticism. The company insists it’s a positive development.

“With this change, you could, for example, easily share your address and mobile phone with a shopping site to streamline the checkout process, or sign up for up-to-the-minute alerts on special deals directly to your mobile phone,” the post by Facebook’s Douglas Purdy explained, in a clear hint to the social network’s plans to make moves in the e-commerce world. “As with the other information you share through our permissions process, you need to explicitly choose to share this data before any application or Web site can access it, and you can not share your friends’ address or mobile number with applications.”
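
A sketch of that opt-in flow in TypeScript, assuming the JavaScript SDK’s FB.login call of the era and the permission names from the announcement (user_address, user_mobile_phone); both should be treated as assumptions here rather than verified API details.

    // Sketch of the opt-in flow Purdy describes: an app must explicitly
    // request permission before reading a user's address or mobile number.
    // The perms parameter and permission names are assumptions from the
    // era's JavaScript SDK, not verified API details.
    declare const FB: {
      login(
        callback: (response: { session?: unknown }) => void,
        options: { perms: string }
      ): void;
    };

    FB.login(
      (response) => {
        if (response.session) {
          // Only after explicit authorization can the app read the new fields.
          console.log("User granted address/mobile access");
        } else {
          console.log("User declined; no contact data is shared");
        }
      },
      { perms: "user_address,user_mobile_phone" }
    );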

But Markey and Barton say they want more answers, including on why Facebook chose to suspend the rollout in the first place, whether third parties will be forced to delete the address and phone number data of users who share it and then decide to stop, and whether it considered the possible risks to children and teenagers who will have access to this new option on Facebook.

“As an innovative company that is responsive to its users, we believe there is tremendous value in giving people the freedom and control to take information they put on Facebook with them to other Web sites,” read a statement from Facebook in response to the letter. “We enable people to share this information only after they explicitly authorize individual applications to access it. This system of user permissions was designed in collaboration with a number of privacy experts. Following the rollout of this new feature, we heard some feedback and agree that there may be additional improvements we could make. Great people at the company are working on that and we look forward to sharing their progress soon.”

Facebook’s privacy policies have riled D.C. lawmakers several times over the past few years, as when Sen. Charles Schumer (D-N.Y.) last year petitioned the Federal Trade Commission to investigate how Facebook handles user data.

This article was first published as a blog post on CNET News.

Google claims Bing copies its search results

After noticing curious search results at Bing, then running a sting operation to investigate further, Google has concluded that Microsoft is copying Google search results into its own search engine.

That’s the report from Search Engine Land’s Danny Sullivan on Tuesday, who talked to both companies and presented Google’s evidence. According to the report, the mechanism could be the Suggested Sites feature of Internet Explorer or the Bing Toolbar for browsers, both of which can gather data about which links people click when running searches.

The story began with Google’s team for correcting typographical errors in search terms, which monitors its own and rivals’ performance closely. Typos that Google could correct would lead to search results based on the correction, but the team noticed Bing would also lead to those search results without saying it had corrected the typo.

Next came the sting, setting up a “honeypot” to catch the operation in action. Google created a “one-time code that would allow it to manually rank a page for a certain term”, then wired those results for particular, highly obscure search terms such as “hiybbprqag” and “ndoswiftjobinproduction”, Sullivan said. With the hand coding, typing those search terms would produce recognizable Web pages in Google results that wouldn’t show in search results otherwise.

Next, Google had employees type in those search terms from home using Internet Explorer with both Suggested Sites and the Bing Toolbar enabled, clicking the top results as they went. Before the experiment, neither Bing nor Google returned the hand-coded results, but two weeks later, Bing showed the Google results that had been hand-coded.
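
To see how clickstream data alone could produce that outcome, consider a toy aggregator–purely illustrative, not Microsoft’s actual pipeline–that records (query, clicked URL) pairs from opted-in browsers and promotes whichever URL dominates the clicks for a query. In TypeScript:

    // Toy illustration of how opt-in clickstream data could surface another
    // engine's hand-coded results: record (query, clicked URL) pairs and
    // return the dominant click target for otherwise-unknown rare queries.
    // Purely illustrative; not Microsoft's actual pipeline.
    const clicks = new Map<string, Map<string, number>>();

    function recordClick(query: string, url: string): void {
      const byUrl = clicks.get(query) ?? new Map<string, number>();
      byUrl.set(url, (byUrl.get(url) ?? 0) + 1);
      clicks.set(query, byUrl);
    }

    function topResult(query: string): string | undefined {
      const byUrl = clicks.get(query);
      if (!byUrl) return undefined;
      // For a rare term, a handful of coordinated clicks is the only
      // signal available, so it wins outright.
      return [...byUrl.entries()].sort((a, b) => b[1] - a[1])[0][0];
    }

    // Employees clicking the hand-coded result from opted-in browsers:
    recordClick("hiybbprqag", "http://example.com/hand-coded-result");
    console.log(topResult("hiybbprqag")); // "http://example.com/hand-coded-result"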

Microsoft didn’t say today whether it plans to continue the practice, but evidently it doesn’t consider it “cheating”, as Google does.

In a comment to Mary Jo Foley, a blogger at ZDNet Asia’s sister site ZDNet, Microsoft said, flatly: “We do not copy Google’s results.” However, that denial turns out to be more a matter of interpretation.

A blog post by Harry Shum, Microsoft’s corporate vice president of Bing, offered some detail on what Microsoft did. He acknowledged monitoring what links users clicked, but essentially described it as letting humans help gather data through crowdsourcing.

We use over 1,000 different signals and features in our ranking algorithm. A small piece of that is clickstream data we get from some of our customers, who opt-in to sharing anonymous data as they navigate the web in order to help us improve the experience for all users.

To be clear, we learn from all of our customers. What we saw in today’s story was a spy-novelesque stunt to generate extreme outliers in tail query [rare search query] ranking. It was a creative tactic by a competitor, and we’ll take it as a back-handed compliment. But it doesn’t accurately portray how we use opt-in customer data as one of many inputs to help improve our user experience.

The history of the web and the improvement of a broad array of consumer and business experiences is actually the story of collective intelligence, from sharing HTML documents to hypertext links to click data and beyond. Many companies across the Internet use this collective intelligence to make their products better every day.

Google made it clear it isn’t happy about it.

“I’ve got no problem with a competitor developing an innovative algorithm. But copying is not innovation, in my book,” Sullivan quotes Google Fellow and search expert Amit Singhal as saying. “It’s cheating to me because we work incredibly hard and have done so for years but they just get there based on our hard work…Another analogy is that it’s like running a marathon and carrying someone else on your back, who jumps off just before the finish line.”

And in a statement to ZDNet Asia’s sister site CNET News, Singhal added that Google disagrees with Microsoft’s position, speaking just as flatly as Microsoft’s denial:

Our testing has concluded that Bing is copying Google Web search results.

At Google we strongly believe in innovation and are proud of our search quality. We look forward to competing with genuinely new search algorithms out there, from Bing and others–algorithms built on core innovation and not on recycled search results copied from a competitor.

Google didn’t respond to CNET questions about whether it plans any actions beyond publicizing the honeypot.

Google brought its concerns to Sullivan shortly before a Bing search event today. Coincidentally or not, the disclosure significantly shifted that event’s agenda. Indeed, the search-copying issue became the focus of a debate between Microsoft and Google representatives at the conference.

Stefan Weitz, director of Microsoft’s Bing search engine, shared this response with Sullivan: “Opt-in programs like the [Bing] toolbar help us with clickstream data [information that shows Microsoft what links people click on], one of many input signals we and other search engines use to help rank sites. This ‘Google experiment’ seems like a hack to confuse and manipulate some of these signals.”

Hack, experiment, or honeypot, it’s very revealing. Google created about 100 such hand-coded results, Sullivan said, so it’s hard to imagine the act distorting search results in any significant way. The next relevancy question will be to see whether Microsoft concludes it’s time to update its own search algorithm so that a Bing search for “hiybbprqag” won’t lead to ticket information for the Wiltern theater anymore.

IPv4 rationing kicks into high gear in Asia

The Asia-Pacific Network Information Center (APNIC) has received two large blocks of IPv4 addresses with another promised to it, giving businesses in the region that are in the midst of their IPv6 migration something of a breather. However, this is a mere respite and the agency is urging the Internet community to heed its migration call.

According to an IDG News Service report, the Internet Assigned Numbers Authority (IANA) gave the last two on-demand blocks of addresses to APNIC on Monday because of the rapid rate at which the regional registry has been handing out IP addresses.

This, in turn, leaves the IANA with its last five blocks of addresses, activating a rule that compels it to allocate one block to each of the five Regional Internet Registries (RIRs)–one of which is APNIC–within the next few days, the report noted.

The RIRs will then funnel these addresses down to Internet service providers (ISPs) and companies within their respective regions. Each block contains 16 million addresses, IDG stated.
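
The “16 million” figure follows from the block size: each IANA-to-RIR allocation is a /8 prefix, which fixes the first 8 of IPv4’s 32 address bits. A quick check of the arithmetic:

```python
# A /8 block fixes 8 of IPv4's 32 bits, leaving 24 bits of address space.
addresses_per_block = 2 ** (32 - 8)
print(addresses_per_block)      # 16777216: the "16 million" per block

# The last five blocks, one per Regional Internet Registry:
print(5 * addresses_per_block)  # 83886080 addresses in IANA's final pool
```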

Migration gripes
Predictions and warnings about the dwindling of IPv4 addresses have been going on for years, but the challenges of transitioning to IPv6 have held back progress, commented an analyst.

Ovum’s senior consultant Craig Skinner said one reason for IPv6’s lack of popularity is that the two protocols are incompatible. He pointed out that IPv6 had been deployed as early as 1999, but that its lack of ubiquity today comes down to its inability to interoperate with its predecessor.

“This makes the transition difficult and it will therefore be necessary to simultaneously maintain IPv4 and IPv6 for many years and to provide solutions for interworking during the transition period,” he noted.
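
What that interworking means for software can be shown with a small sketch: a client that resolves a name to addresses in both families and prefers IPv6, falling back to IPv4 when it must. This is a minimal illustration of dual-stack operation, not a production-grade connection strategy.

```python
import socket

def connect_dual_stack(host, port):
    """Try every address a name resolves to, IPv6 first, then IPv4."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    # False sorts before True, so AF_INET6 entries are attempted first.
    infos.sort(key=lambda info: info[0] != socket.AF_INET6)
    last_error = OSError("no addresses found for %s" % host)
    for family, socktype, proto, _canonname, sockaddr in infos:
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(sockaddr)
            return sock
        except OSError as err:
            last_error = err
    raise last_error

# Example: sock = connect_dual_stack("www.example.com", 80)
```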

Additionally, an earlier report revealed that countries and organizations are less than keen about migrating to the new protocol due to the high costs involved and its lack of vision for future Internet development.

“IPv6 addresses were designed as the solution to the predicted shortage of IPv4 addresses, but as an industry, it has been easier to extend usage of IPv4 rather than undergo the challenge of transitioning to IPv6,” Skinner surmised.

Technologies such as dynamic host configuration protocol (DHCP) and network address translators (NATs), which allow a pool of users to share public IP addresses, have helped prolong IPv4’s life, the analyst added.

There are limitations to these technologies, though.

NATs, for instance, can break the Internet’s end-to-end communications principle, causing complications for developers, particularly those working in the VoIP (voice over Internet Protocol), video conferencing and P2P (peer-to-peer) arenas, Ovum stated.

As such, migration to IPv6, painful though it might be, is still the most viable option to maintain the Internet’s well-being, APNIC stated.

The agency said: “IPv6 is the only means available for the sustained ongoing growth of the Internet, and we urge all members of the Internet industry to move quickly towards its deployment.”

Facebook’s next big media move: Comments

Facebook is planning to launch a third-party commenting system in a matter of weeks, according to multiple sources familiar with the new product. This new technology could see Facebook as the engine behind the comments system on many high-profile blogs and other digital publications very soon.

The company is actively seeking major media companies and blogs to partner with it for its launch, part of a bigger media industry move spearheaded in part by the recent hires of Nick Grudin and Andy Mitchell, media business development executives with respective track records at Newsweek and The Daily Beast.

Representatives from Facebook were not immediately available for comment.

Facebook, of course, is already very present in blog comments. Currently, a digital publishing outlet–say, a blog or a newspaper’s Web site–can integrate Facebook’s developer API and allow users to “connect” to their Facebook accounts, or can build in “Social Comments” in a widget of related messages. Often, users can post alerts on their Facebook walls announcing that they’ve commented, or can have a “Social Comment” turned into a status message. The new commenting product is a significantly deeper expansion of this, according to sources. Facebook will be able to power the entire commenting system–handling the log-in and publishing, cross-promoting comments on individuals’ Facebook walls, and possibly even promoting them as well on media outlets’ own “fan” pages. Undoubtedly, the Facebook “like” button will be deeply integrated as well.

ZDNet Asia sister site CNET has not seen mockups, but it’s conceivable that the whole thing could look quite a bit like TimesPeople, a commenting and social news system that The New York Times launched several years ago for its own publication.

One source hinted that the Facebook commenting product may also permit users to log in with Google, Yahoo, or Twitter IDs if a publisher chooses to incorporate them. That’s a surprising move considering Facebook’s curious relationship with the developer arms of both Google and Twitter–Facebook blocked a Google data-portability product called Friend Connect several years ago, and last summer it blocked a Twitter friend-finder that trawled Facebook contact lists.

It’s also not clear how–if at all–Facebook commenting will deal with the tension between Facebook’s insistence that members use their real identities, and the fact that much of the commenting that takes place on blogs and other media outlets is still done behind a veil of anonymity.

Whatever the specifics are, this new comments product could have serious reverberations in the start-up community. One source who has seen the new Facebook commenting technology remarked that it’s an obvious and direct competitor to start-ups that provide commenting technology, like Disqus and Echo. With Facebook Places adopting much of the “check-in” methodology that smaller competitors Foursquare and Loopt offer, and Facebook Questions operating in the same space as Quora (though Facebook has insisted it’s not trying to “kill” it), the social network has shown that it’s very willing to move into spaces dominated by start-ups and instantly give them a huge new competitor.

But considering the frequency with which Facebook launches new features, it’s inevitable. In the past six months, Facebook has launched the Places geolocation service, a revamped Facebook Messages, and new upgrades to Facebook Photos and Groups–to name a few.

Peter Kafka of AllThingsD points out that celebrity news magazine People’s Web site has been relying exclusively on Facebook for its commenting technology for a few months now. Some other sites–though few major publishers–have as well, through the Comments Box code.
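
For sites already on the Comments Box, threads are keyed by the page’s URL. The sketch below shows how a publisher might pull a thread server-side through the Graph API; the endpoint shape, response fields and URL are assumptions for illustration, not details confirmed by the article.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical: fetch the Comments Box thread for a page, keyed by the
# page's URL. Endpoint and field names are assumed, not documented here.
PAGE_URL = "http://example.com/some-story"
endpoint = ("https://graph.facebook.com/comments/?ids="
            + urllib.parse.quote(PAGE_URL, safe=""))

with urllib.request.urlopen(endpoint) as resp:
    payload = json.loads(resp.read().decode("utf-8"))

for comment in payload.get(PAGE_URL, {}).get("data", []):
    print(comment["from"]["name"], "said:", comment["message"])
```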

It’s likely that the broader commenting product will look similar, but significantly more enhanced: TechCrunch notes that comment voting like in Facebook Questions will likely be part of it, as will some kind of ranking system for individual users to gauge their activity levels.

For Google’s AdWords, relevance takes time

Dodge’s Challenger is a modern muscle car. The Challenger explosion 25 years ago was a tragic moment. Other than the name they don’t have much in common, but for several hours Friday morning, Google’s AdWords system considered them linked.

That’s just one example of a weak spot in Google’s famous AdWords system, which turned an interesting Stanford science project into the world’s most powerful Internet company. Simply put, it takes some time for the AdWords system to determine whether an ad triggered by a search query is truly relevant to that query. In times of breaking news or a sudden spike in certain queries, that lag means Google often serves completely irrelevant ads, such as the one promoting the Challenger’s Hemi engine above news stories about the 25th anniversary of the Challenger space shuttle disaster.

A week of study of Google’s “hot searches” as measured by Google Trends–a compilation of search terms whose query volume is disproportionately rising at a given hour compared to their usual frequency–provided numerous examples of how AdWords can require at least several hours to obtain enough feedback to properly rank ads.

Breaking news stories about the death of fitness guru Jack LaLanne triggered an ad for The Cord Bug, an accessory for car owners in cold climates who need to keep their engines warm overnight, in the most prominent slot. After a five-foot long monitor lizard was discovered wandering around a Southern California condo complex and showcased on morning news shows Wednesday, Google News served computer-monitor ads for several hours alongside search results.

This is probably not an issue on incoming CEO Larry Page’s immediate to-do list, as Google continues to make quite a bit of money from relevant ads on the majority of searches. But it does speak to the thorny problem of determining relevancy in real time: it’s not just a search problem, it’s an ad problem too.

Your Quality Score is important to us
Ad rankings on Google for search keywords are determined by two main factors: the maximum cost an advertiser is willing to pay per click, and an ad’s “quality score”, which is a measure of how relevant the ad’s copy is to the desired keyword, among other things. Even if an advertiser is willing to spend a lot of money per click, if their ad scores poorly on quality, it will likely appear below ads from advertisers that weren’t willing to pay as much but scored higher on quality.

However, it takes time for Google to determine the quality score for a new ad. It needs to measure how often users are clicking on the ad as compared to other ads, as well as whether users are staying on the landing page behind the ad as opposed to returning immediately to Google.
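
Google doesn’t publish the formula, but a toy model of the two factors described above makes the gap visible. Everything here is invented for illustration: the “bid times quality” rank, the provisional score of 5.0 for an ad with no history, and the quality formula blending click-through rate with landing-page stays.

```python
def ad_rank(max_cpc_bid, quality_score):
    """Toy model: rank is the product of the two factors named above."""
    return max_cpc_bid * quality_score

def observed_quality(clicks, impressions, landing_page_stays):
    """Invented formula: blend click-through rate with how many clickers
    stayed on the landing page instead of bouncing back to the results."""
    if impressions == 0:
        return 5.0  # no feedback yet: a middling provisional score
    ctr = clicks / impressions
    stay_rate = landing_page_stays / clicks if clicks else 0.0
    return 10.0 * ctr * (0.5 + 0.5 * stay_rate)

# Hour 0: a fast mover bids high on a trending query with no history,
# and the provisional score lets money alone carry it to the top slot.
print(ad_rank(2.00, observed_quality(0, 0, 0)))      # 10.0
# Hours later: 1,000 impressions, 20 clicks, 2 clickers stayed.
print(ad_rank(2.00, observed_quality(20, 1000, 2)))  # 0.22: off the page
```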

How long does this take? Google won’t say, but it’s at least several hours in many cases.

Google would only offer a statement on the issue. “Google’s advertising system determines the quality of an ad based on how users are responding to that ad. This process can take a brief amount of time, especially if it’s a fast-rising query that is newly popular,” it said. Your definition of “brief,” of course, may vary.

The gap is important for a few reasons. First of all, Google’s top priority is to serve relevant content to its users, and it has long considered ads to be useful content so long as they are relevant to one’s query.

Also, the gap allows advertisers to piggyback on search queries in Google Trends much the same way news organizations latch onto those reports in hopes of directing some of that search spike their way. An advertiser could get a decent amount of traffic relatively cheaply if they are quick to jump on a trending keyword that not many other people have purchased, taking what they can get before the quality score calculations take place and kick them off the page.

One prominent advertiser on trending topics in Google throughout the whole week was Ask.com, which confirmed that as part of its ongoing traffic-acquisition strategy it frequently purchases Google ads linked to trending search terms that direct clicks back to Ask.com’s pages on that topic. (Yahoo and Microsoft’s Bing employ similar strategies.) Those ads actually fare well in the quality score calculation since it’s clear what type of content Ask.com is advertising, but it’s not hard for others with less-relevant content to employ the same fast-mover strategy and settle for the second, third, or fourth spot on the search-results page until the calculations take effect.

AdWords showing its age?
But perhaps more troubling for Google is the notion that the system that generates an amazing amount of cash is a bit too creaky for a Web that publishes content at a speed which Google never could have anticipated 10 years ago when the system was first designed.

Expectations of how content should be delivered on the Internet are changing as news publishers and consumers focus on speed: just look at the demand for information following reports that Michael Jackson had died in the summer of 2009. There is an opportunity to serve relevant ads alongside that content in Google News or Realtime search that the company is simply missing because of the delay in determining relevancy.

That doesn’t bode well for its chances of using the current incarnation of AdWords to monetize real-time content on Google. Irrelevant ads aren’t good for anyone in Google’s system: users don’t want to see ads perceived as spam, advertisers want to target likely buyers, and Google won’t make money from ads that receive few or no clicks. That’s not to mention any institutional embarrassment from missing the mark when it comes to relevancy.

As Web usage shifts more and more toward the real-time consumption of content, Google will need to develop a strong system for ranking both the relevancy of the content as well as the ads. Somewhere, someone is working on this extremely difficult computer science problem. If they’re not inside of Google already, the company might want to find them.

Google launches algorithm change to tackle content copying

In a previous blog post, Matt Cutts, head of Google’s Webspam team, wrote about the progress the team has made in reducing the amount of spam in search engine results. In that post, he hinted at some changes in the works to push spam levels lower, including one that affects sites that copy content from other sites, as well as those that have low levels of original content.

Clearly, there’s a blurry line there – or a “slippery slope“, as ZDNet’s Larry Dignan referred to it in his own post that waved some red flags over how the quality of a site would be judged.

Last week, Cutts posted an update to his earlier post on his own blog, announcing that one specific change to the algorithm was approved at the team’s weekly meeting and launched earlier this week. In his post, Cutts explains:

This was a pretty targeted launch: slightly over 2 percent of queries change in some way, but less than half a percent of search results change enough that someone might really notice. The net effect is that searchers are more likely to see the sites that wrote the original content rather than a site that scraped or copied the original site’s content.

Read more of “Google launches algorithm change to tackle content copying. Will it help?” at ZDNet.

LinkedIn files for IPO

LinkedIn has formally announced its plans to go public through the filing of an S-1 form with the U.S. Securities and Exchange Commission Friday, marking the first time that the business networking site has disclosed detailed facts about its financial operations.

“We believe we are transforming the way people work by connecting talent with opportunity at massive scale,” LinkedIn explained in its filing. “Our goal is to provide a global platform capable of mapping every professional’s experience, skills, and other relevant professional data to his or her professional graph, including connections with colleagues and business contacts.”

Through a combination of advertising and business services, LinkedIn has managed to actually make some money in the process. Net revenue in the first nine months of 2010 was US$161 million, with a profit of US$10 million; in the same period in the previous year, it logged half that revenue and only US$3.4 million in profit.

LinkedIn has more than 90 million registered members, up from 55 million a year before–a statistic that it’s been more vocal about as a private company. But in the S-1 filing, the company warned that not all of its registered users are active and that a minority of members are responsible for the “substantial” majority of its 5.5 billion page views.

This article was first published as a blog post on CNET News.

M’sian govt under fire for online media controls

PETALING JAYA–The Malaysian government’s move to introduce policies that will provide more control over online content has come under fire from opposition politicians and industry watchers.

According to a report this week by local news agency Bernama, the Home Ministry was reviewing the definition of the word “publication” in the country’s Printing Presses and Publications Act (PPPA) 1984 to decide if it should now include Internet content, blogs and social networks such as Facebook. The ministry noted that the landscape today is different with the intrusion of digital technology.

The PPPA governs publishing and the use of printing presses in Malaysia. Under the Act, all printing presses require a licence that must be renewed yearly, subject to the approval of the Home Ministry.

Net censorship outlawed

Malaysia’s laws stipulating that the Internet cannot be censored are provisioned under the Multimedia Super Corridor (MSC) Bill of Guarantees as well as Article 3(3) of the Communications and Multimedia Act 1998.
These policies were established in 1996 as part of former premier Mahathir Mohamad’s efforts to transform Malaysia into an infocomm and multimedia powerhouse through projects such as the MSC.
The government has largely kept its promise not to enforce Internet censorship, with some glitches in the past, including its attempt to block Netizens from accessing Malaysia Today, the Web site of prominent blogger Raja Petra Kamaruddin.

Quoting the ministry’s secretary-general Mahmood Adam, the report said: “We hope the amendments will be tabled in Parliament by March this year because we need to overcome weaknesses, especially those involving multimedia content.”

The announcement, however, has drawn condemnation from the online community on social networks Twitter and Facebook, as well as from politicians and industry watchdogs.

Lim Kit Siang, parliamentary leader of opposition Democratic Action Party, described the move as the government’s latest attempt to quell online dissent and a clear violation of its promise not to enforce censorship on the Internet.

“They should be aware of this violation and if they proceed with this, they will frighten away investors,” Lim told online news portal, The Malaysian Insider (TMI). “If the guarantee is not honoured, investors will view Malaysia as losing its credibility.”

Step back to “stone age”
Edmund Bon, the Malaysian Bar Council’s constitutional law committee chief, also pointed to the Bill of Guarantees (BoG), which contains the government’s pledge not to censor the Internet. He told ZDNet Asia that any attempt to regulate online content is a violation of this.

“The PPPA amendment is taking us further away as a civil society and closer to a police state,” Bon told ZDNet Asia. “We can never become a developed nation with such laws… We are going back to the stone age.”

Nik Nazmi, communications director of National Justice Party (PKR), said the government’s attempt to extend the scope of the controversial law, as a way to demonstrate its commitment to reform civil liberties, is “merely superficial”.

“PKR calls for the government to scrap this misguided plan and work toward amending the PPPA instead to show they are truly serious about change,” Nik said in a statement.

The National Union of Journalists (NUJ) also described the latest move as a backward attempt to block the spread of information to the public.

“The NUJ is worried and disappointed with the Home Ministry’s plans to amend the PPPA in order to control media freedom in the country,” NUJ President Hata Watahari said in a statement. “In view of this, NUJ wants the Home Ministry to immediately stop all efforts to amend the PPPA.”

Khairy Jamaluddin, a Member of Parliament for the ruling Barisan Nasional coalition, also voiced his concerns over the move to amend the Act. Noting in his blog that there were 8.5 million Facebook users in Malaysia, 84 percent of whom were aged 35 years and below, Khairy said the proposed move would not only offend the younger generation; social networking users, he added, would not accept any attempt to shackle a platform they were now so familiar with.

Amendments still under discussion
In a bid to quell the rising dissent, Home Minister Hishammuddin Hussein said in the local press Wednesday that the proposed PPPA amendments have yet to be finalized and discussions are still in the early stage.

The minister said no decisions have been made, adding that his secretary-general was simply giving his views on the issue.

Hishammuddin said any objection to the proposal would be premature since the actual amendments to the Act have not been determined. “These may be relaxed and loosened, or they may not even be [proposed] to the committee,” he said.

Edwin Yapp is a freelance IT writer based in Malaysia.

Facebook: Egypt hasn’t blocked us yet

Social media has ingrained itself so thoroughly as an instrument of activist organization that it is targeted by many an authoritarian government seeking to quell an uprising.

This week, as protests descended upon the Egyptian capital of Cairo, Twitter confirmed that it had been blocked in the North African country. On Wednesday, rumors began to spread that Egypt was trying to block Facebook as well–especially since it appears that a Facebook “event” page had been how many of the protesters found out about the gathering.

But the social networking site claims it has not been blocked.

“We are aware of reports of disruption to service but have not seen any major changes in traffic from Egypt,” Facebook spokesman Andrew Noyes told ZDNet Asia’s sister site CNET via e-mail. “You may want to visit Herdict.org, a project of the Berkman Center for Internet & Society at Harvard University that offers insight into what users around the world are experiencing in terms of Web accessibility.” Herdict.org was also recommended by Twitter as a destination for users seeking answers during several hours on Tuesday in which the company itself declined comment.

That doesn’t mean Egyptian authorities aren’t trying. In CNET’s coverage of Tuesday’s news that Twitter had reportedly been blocked, Mark Belinsky, co-director of the nonprofit Digital Democracy, explained that sometimes governments will not block a site altogether to crack down on activist opposition but may make its servers extremely difficult to access by slowing the connection down.

Google extends SEA presence to Malaysia

KUALA LUMPUR–Google has reaffirmed its commitment to further invest in Southeast Asia with the opening of a new office in Malaysia’s city center.

According to Julian Persaud, managing director for Google Southeast Asia, the new office here is the second in this part of the region, almost four years after establishing its first in Singapore.

“We are increasing our investments here in the region and the opening of the KL office will focus on what we can bring to local users, advertisers and our partners,” Persaud said during a media conference here Wednesday.

Asked how much the Internet giant planned to invest in Malaysia, he declined to provide details, noting only that its presence here signals the company’s intent to “further invest” in the country.

Malaysia was chosen because of its highly-skilled workforce, multicultural diversity and business-friendly environment, he said, noting that Google had previously worked on several local projects including a collaboration with the Ministry of International Trade and Industry.

“This step forward is inevitable and our presence here is a logical conclusion to our involvement with Malaysia,” Persaud said.

Eyeing consumers, SMBs
According to Sajith Sivanandan, Google Malaysia’s newly appointed country manager, the company’s local outlet will focus on the consumer and small and midsize business (SMB) market segments.

“For our users, we want to continue on the path of localization including bringing in features such as Google’s StreetView to Malaysia. From the business perspective, we want to help local enterprises extend their businesses through our products and services,” explained Sivanandan, who previously led the company’s Southeast Asian online advertising business for the travel sector.

Noting that the cost of advertising and promotions is still too high for many SMBs, he said Google hopes to reduce the barrier of entry for smaller enterprises and provide the platform that will enable local SMBs to reach out to customers worldwide.

The search company’s local office is located in the city center and it is looking to recruit new hires, Sivanandan said. “We are looking for smart analytical people and those who are interested to join a startup environment.

“The roles we’re looking at encompass client servicing, sales and marketing as well as roles in public relations and corporate communications,” he added.

Edwin Yapp is a freelance IT writer based in Malaysia.

W3C narrows ‘HTML5’ logo meaning to HTML5

The World Wide Web Consortium, faced with derision that its new HTML5 logo represented a broader set of Web technologies, has pared down the logo’s scope.

“Since the main logo was intended to represent HTML5, the cornerstone of modern Web applications, I have updated the FAQ to state this more clearly. I trust that the updated language better aligns with community expectations,” W3C spokesman Ian Jacobs said last week in a blog post.

Indeed, the HTML5 logo FAQ now states in no uncertain terms: “This logo represents HTML5, the cornerstone for modern Web applications.” Those who want to promote related technologies–Cascading Style Sheets (CSS), Web Open Font Format (WOFF), Scalable Vector Graphics (SVG), and Web Sockets, for example–can use the accompanying but subordinate icons.

HTML5 has become something of a marketing buzzword–to some Web developers’ chagrin, since it sometimes stands for so much more than the next version of Hypertext Markup Language that’s being standardized at the W3C and at its longtime home, the Web Hypertext Applications Technology Working Group (WHATWG).

Indeed, after the W3C released the HTML5 logo last week, Ian Hickson, editor of the specification at both W3C and WHATWG, moved up the schedule to drop use of HTML5 in favor of just HTML. That’s only happening at the WHATWG, though he’d like to see it at the W3C as well.

This article was first published as a blog post on CNET News.

Google’s Schmidt gets $100 million stock award

Google’s Eric Schmidt, who, the company said Thursday, will move from CEO to executive chairman in the spring, has received an award of US$100 million in stock and stock options, according to a report.

Following on an initial report from Bloomberg, and citing compensation specialists and data, The Wall Street Journal called the award “highly unusual” for a sitting CEO, adding that equity awards like this are usually given to new chief executives.

The Journal said Schmidt’s award, which vests over four years, is the largest in grant-dollars for a sitting CEO since Sanjay Jha, Motorola’s then co-chief, received a US$103 million grant in 2008.

Schmidt has been CEO of Google for the past decade, as the company has become known worldwide for its impressive market value and its various tech- and Internet-related initiatives.

Last Thursday, the same day it announced that Schmidt would hand the CEO reins to Google co-founder Larry Page, Google reported fourth-quarter revenue of US$6.37 billion (minus traffic acquisition costs)–ahead of analyst estimates. Net income for the quarter was US$2.54 billion, or US$2.85 billion excluding onetime charges; earnings per share excluding charges were US$8.75, beating analyst estimates of US$8.09.

This article was first published as a blog post on CNET News.

Mozilla offers do-not-track tool to thwart ads

Mozilla, acting on a U.S. Federal Trade Commission proposal, has offered a detailed mechanism by which Firefox and other Web browsers could prevent Web pages from tracking people’s online behavior for advertising purposes.

With Mozilla’s do-not-track technology, network data packets from the browser would signal to a Web site that a person doesn’t wish to be tracked. Then comes the tricky part: getting Web site operators to cooperate.

Alex Fowler, Mozilla’s global privacy and public policy leader, said that with the mechanism, the browser would alert a Web site during basic communications that use the Web’s Hypertext Transfer Protocol (HTTP). He also acknowledged that getting Web sites to cooperate is a crucial difficulty in getting the system to work:

As the first of many steps, we are proposing a feature that allows users to set a browser preference that will broadcast their desire to opt-out of third party, advertising-based tracking by transmitting a Do Not Track HTTP header with every click or page view in Firefox. When the feature is enabled and users turn it on, Web sites will be told by Firefox that a user would like to opt-out of OBA [online behavioral advertising]. We believe the header-based approach has the potential to be better for the Web in the long run because it is a clearer and more universal opt-out mechanism than cookies or blacklists…

The advantages to the header technique are that it is less complex and simple to locate and use, it is more persistent than cookie-based solutions, and it doesn’t rely on users’ finding and loading lists of ad networks and advertisers to work…

The challenge with adding this to the header is that it requires both browsers and sites to implement it to be fully effective. Mozilla recognizes the chicken and egg problem and we are taking the step of proposing that this feature be considered for upcoming releases of Firefox.
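
The cooperation problem is easy to see from the server side. Below is a minimal sketch of a site that honors the proposal, assuming the header form “DNT: 1” (the exact header name was still being settled); a site that ignores the header simply never runs this check, which is the chicken-and-egg problem Fowler describes.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class AdHandler(BaseHTTPRequestHandler):
    """Toy server honoring an opt-out header, assumed here as 'DNT: 1'."""

    def do_GET(self):
        opted_out = self.headers.get("DNT") == "1"
        # A cooperating site switches off behavioral targeting; nothing
        # forces it to, which is the weakness of a client-side flag alone.
        body = b"generic ads\n" if opted_out else b"behavioral ads\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), AdHandler).serve_forever()
```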

Mozilla has long had privacy as part of its mission to empower users of the Internet. In practice, privacy remains a broad, thorny problem, however; what one person sees as corporate intrusiveness another can see as a way to offer genuinely relevant ads.

Mozilla doesn’t appear to be acting alone. “Google is expected to announce a privacy tool called ‘Keep My Opt-Outs’ that enables users to permanently opt out of ad-targeting from dozens of companies,” The Wall Street Journal reported Monday, citing an unnamed source.

The FTC proposed a Do Not Track mechanism last year (PDF).

“While some industry members have taken positive steps toward improving consumer control, there are several concerns about existing consumer choice mechanisms,” the FTC said. Among them, “industry efforts to implement choice on a widespread basis have fallen short”, consumers aren’t generally aware of the technology when it’s available, and it can be hard to use.

“Given these limitations, [FTC] staff supports a more uniform and comprehensive consumer choice mechanism for online behavioral advertising, sometimes referred to as ‘Do Not Track’,” the report said. “Such a universal mechanism could be accomplished by legislation or potentially through robust, enforceable self-regulation.”

This article was first published as a blog post on CNET News.

Google ready for action against content farms

Google is ready to fire a shot across the bow of the so-called content farms, willing to acknowledge recent criticism of the quality of its search results but still not quite ready to detail specific remedies.

The company announced last week that it has heard the complaints over the past several months regarding the quality of Google search, without question the most important component of Google’s public image. While no hard details were provided in an interview prior to the announcement, Google’s Matt Cutts, principal engineer and lead voice on search-quality issues, told ZDNet Asia’s sister site, CNET that the company will employ crowd-sourced feedback and other metrics in hopes of penalizing content scrapers and obviously low-content sites within its index.

“Today, English-language spam in Google’s results is less than half what it was five years ago, and spam in most other languages is even lower than in English,” Cutts said in a blog post last week. “However, we have seen a slight uptick of spam in recent months, and while we’ve already made progress, we have new efforts underway to continue to improve our search quality.”

Google has been thinking for quite some time about how to deal with content that isn’t obvious spam but is clearly not designed with the best interests of the user in mind, Cutts said. “Google needs to be open to ways where we can improve.”

As with anything pertaining to ranking Google search results, the stakes are high. In just one example, Demand Media is set to pursue an initial public offering next week expected to price the company at around US$1.3 billion: a company based almost entirely on the prospect of creating content geared to rank highly in Google. While Demand employees might disagree, it’s fair to say the quality of much of that content (“How to prepare a house as a rental property”) is questionable at best, as the company’s main interest is in pumping out high quantities of cheaply produced content for the Web.

Google is considering a number of options to deal with the rise of content farms, Cutts said. First off, it plans to change its famous search recipe to ding sites that are clear content “scrapers”, or those that copy content wholesale from other sites and repost it under their own domain, credit or not.
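
Google hasn’t said how it detects scrapers, but a standard textbook approach to the same problem gives a feel for it: break each page into overlapping word “shingles” and measure how much of a newer page is contained in an older one. The sketch below is that generic technique, not Google’s recipe.

```python
# Generic near-duplicate detection, not Google's recipe: a page whose
# word shingles are almost all contained in an earlier page is a likely
# scrape of it.
def shingles(text, k=5):
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def containment(candidate, original, k=5):
    cand, orig = shingles(candidate, k), shingles(original, k)
    return len(cand & orig) / len(cand) if cand else 0.0

original = ("ten tips for preparing a house as a rental property, from "
            "cleaning the gutters to screening tenants and setting a "
            "fair monthly rent for your neighborhood")
scrape = "my great blog presents: " + original

print(round(containment(scrape, original), 2))  # 0.85: mostly copied
```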

Quality, however, is a much more subjective matter. One thing Google plans to promote is an extension for its Chrome browser that allows users to label sites as spam, hoping that if it amasses enough data on sites that consistently put out low-quality content it will have more standing with the publishers of those sites to deflect complaints about ranking changes, Cutts said.

Otherwise, Google will try to find an algorithmic solution to the scourge of low-quality Web sites designed solely in hopes of ranking high within Google, Cutts said. Google would prefer that you conduct searches logged into its site with all the personalization options that are available, all the better to weed out spammy sites that offend individual users. However, not everyone wants to provide Google with that much information, and so the company is also working on ways to deal with search quality at a basic level.

Despite the obvious benefits toward improving the quality of any search engine’s results, there are clear landmines for Google in going down this road. The first content publisher dinged by Google’s new algorithmic recipe (Cutts refused to say when it would be implemented, citing a long-standing policy of not preannouncing specific changes) is likely to scream bloody murder about unfair treatment to anyone who is interested, and it’s fair to say Google competitors and government regulators are listening for such complaints.

In response, Google pointed to arguments against the nascent concept of “search neutrality,” which suggest that government intervention in search results could actually create a field day for spammers. Specifically, it chose to highlight the arguments of James Grimmelmann, a professor at New York Law School, who wrote an essay on search neutrality that concluded “A good search engine is more exquisitely sensitive to a user’s interests than any other communications technology“. (emphasis author’s)

Yet for Google, there’s little more important to the company’s well-being than the notion that people still find its services useful. And it sounds like it’s willing to risk whatever flak comes its way as a result of future ranking changes, although it’s worth noting that the details of how such a strategy might be implemented are quite sparse.

“Google really cares about our search quality,” Cutts said. “If we run into complaints on the Web, often we’ve already complained about it internally,” he said. Google reviews its search algorithms constantly, making about one change a day to the 200 or so signals that determine where a site ranks against a search query.

It’s fair to say that search quality will rank among the higher priorities for new Google CEO Larry Page over the next several months. Cutts acknowledged the importance of the issue to the company but noted that anything driven by humans is subject to flaws.

“We take pride in Google search and strive to make each and every search perfect,” Cutts wrote. “The fact is that we’re not perfect, and combined with users’ skyrocketing expectations of Google, these imperfections get magnified in perception. However, we can and should do better.”

This article was first published as a blog post on CNET News.

Mozilla blocks Skype’s Firefox-crashing add-on

Mozilla has barred a Skype extension for Firefox, accusing it of causing 40,000 browser crashes a week and of dramatically slowing page-load times.

“We believe that both of these items constitute a major, user-facing issue, and meet our established criteria for blocklisting an add-on,” Mozilla said in a blog post last Thursday. Because the extension is installed by default when Skype’s main software is installed, a “large number of Firefox users who have installed Skype have also installed the Skype Toolbar, knowingly or unknowingly”, Mozilla said.

Mozilla is in contact with Skype programmers and will restore the extension’s privileges if the problems are addressed, the organization said.

In a statement, Skype said it’s resolving the problem.

“Based on our initial investigation, we know that downloading the new client will fix for most users any compatibility issues, and are working with Mozilla to ensure that there are no other compatibility issues. We are sorry for any inconvenience this has caused our users,” the company said.

The Skype toolbar extension, bundled with the Skype software for making audio and video calls over the Internet, highlights phone numbers in Web pages to make it easier to call them with Skype. Those who really like it can still run the toolbar, Mozilla said: “The blocklist entry will be a ‘soft block’, where the extension is disabled and the user is notified of the block and given the option to re-enable it if they choose. It’s also important to note that the Skype application itself will continue to work as it always has; only the Skype Toolbar within Firefox is being disabled.”

The extension has been the No. 1 or No. 2 cause of crashes for the current stable version of Firefox, according to comments in Mozilla’s bug tracker. And the plug-in dramatically slows Firefox’s processing of Web page elements through what’s called the Document Object Model (DOM)–by a factor of 3 to 8 with a newer 5.x version and by a factor of 325 with the older 4.x version, Mozilla programmer Boris Zbarsky said. The effect of this is to make pages appear to load much more slowly.

Earlier in January, a Skype representative acknowledged that the company knows about the issue. “Look out for an update in the near future,” the representative said.

This article was first published as a blog post on CNET News.

Facebook coughs up information on Goldman deal

Facebook’s recent investment round, led by investment bank Goldman Sachs, has been one of the most-talked-about news events that the social network has gotten itself involved in–and arguably the one about which it’s been the most secretive.

Last Friday, Facebook broke the silence by issuing a press release in which it confirmed, finally, that it has raised US$1.5 billion (US$1 billion from Goldman Sachs and US$500 million in a round that also includes existing investor Digital Sky Technologies) at a US$50 billion valuation.

“DST and Goldman Sachs approached Facebook to express their interest in making an investment, and Facebook decided it was an attractive opportunity to bolster its cash reserves and increase its financial flexibility with limited dilution to existing shareholders,” the press release explained, adding that the US$1 billion came in the form of an oversubscribed offering to Goldman’s overseas clients. That confirms two reports: one, that demand for Facebook stock was unexpectedly high; and two, that U.S.-based Goldman clients could not participate in the offering because of a conflict with securities laws.

“There are no immediate plans for these funds,” the press release continued. “Facebook will continue investing to build and expand its operations.”

Even the issuing of a press release is rare for Facebook, which prefers to make announcements via posts on its company blog.

Facebook also acknowledged that because of this round, it expects to pass 500 individual shareholders this year and therefore will have to disclose its financials publicly–even if it remains a private company–by the end of April 2012.

This article was first published as a blog post on CNET News.

Google: 100 percent uptime ‘not attainable’

Google’s recent tweaks to its service level agreement (SLA) are not a promise to deliver 100 percent uptime but, rather, an initiative to provide greater assurance for its customers, a company spokesperson clarifies.

In a blog post dated Jan. 17, Matthew Glotzbach, enterprise product management director for Google, announced that the Internet giant is improving its SLA for Google Apps by removing a previous clause that allows for scheduled downtime.

When contacted, a company spokesperson clarified that this means customers can expect no downtime of Google Apps services when Google is performing upgrades or maintenance on its systems, but the company is not claiming it can provide absolute service uptime.

“[Google doesn’t] believe that 100 percent uptime is attainable with commercial services. For comparison, even the landline telephone doesn’t reach 100 percent uptime”, she told ZDNet Asia in an e-mail.

She noted that there will be occasions of unforeseen and unexpected downtime, pointing to exclusions listed in the Google Apps SLA. For example, a service may experience downtime from factors constituting “Force Majeure“, such as natural disasters and acts of war or terrorism.

Previously, the Google Apps SLA would not recognize intermittent downtime of less than 10 minutes. This has now been removed. “Now any intermittent downtime is counted”, said Glotzbach in the blog entry.

The revisions in the SLA mean a higher assurance of less downtime, because Google now counts downtime from any cause, planned or unplanned, toward its SLA, the spokesperson said.

She added that what remains unchanged in the SLA is that Google’s services will achieve at least 99.9 percent uptime in any calendar month. If its services drop below 99.9 percent for the month, customers will receive service credits in return. Service credits are days of service added to the end of the customer’s contract.
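
The arithmetic behind counting every minute is worth making explicit. A short sketch of what 99.9 percent permits, and how a shortfall might map to credits (the credit tiers below are illustrative placeholders, not quoted from the SLA):

```python
def allowed_downtime_minutes(days_in_month, uptime_target=0.999):
    """Minutes of downtime a month can absorb and still meet the target."""
    return days_in_month * 24 * 60 * (1 - uptime_target)

print(round(allowed_downtime_minutes(30), 1))  # 43.2 minutes in a 30-day month
print(round(allowed_downtime_minutes(31), 1))  # 44.6 in a 31-day month

def service_credit_days(monthly_uptime):
    """Illustrative tiers only; the real schedule is defined by the SLA."""
    if monthly_uptime >= 0.999:
        return 0
    if monthly_uptime >= 0.99:
        return 3
    if monthly_uptime >= 0.95:
        return 7
    return 15

print(service_credit_days(0.998))  # 3 extra days of service
```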

Glotzbach wrote that Google will, hence, “eliminate maintenance windows from their service level agreement (SLA)”, making it the first major cloud provider to do so.

Asked to respond to Google’s SLA changes, a Microsoft spokesperson said in an e-mail: “Microsoft Online Services offer the most rigorous financially-backed SLAs. We guarantee 99.9 percent uptime, or we give customers money back.”

When contacted, cloud service providers Amazon Web Services and Salesforce.com declined comment.

HTML editor dumps ‘HTML5’ even as W3C touts it

Two days after the World Wide Web Consortium debuted a flashy new HTML5 badge, none other than the editor of the Hypertext Markup Language standard has dumped the hot tech buzzword.

HTML is the new HTML5,” Ian Hickson, who edits the specification, said in a blog post yesterday. The announcement embodies a more continuous development process that he’s planned for more than a year, but Hickson told ZDNet Asia’s sister site, CNET, Friday that the W3C’s HTML5 badge–which controversially stands for a number of Web technologies beyond HTML–hastened a change that had been planned for later in 2011.

“Now even the W3C is saying ‘HTML5‘ means everything from CSS to font formats, so advocates really were left without anything to specifically refer to HTML. So we asked around, and the objections to the rename were much reduced already, even compared to a week ago, so we went for it,” Hickson said.

The demise of the HTML5 label, though, will only affect one of the two groups that oversee HTML’s development: the W3C remains attached to it.

Hickson is a Google employee who has shepherded the standard for years, first through an informal group called the Web Hypertext Applications Technology Working Group (WHATWG) and now also with the more buttoned-down World Wide Web Consortium (W3C).

Because Hickson isn’t the sole authority involved, don’t expect the HTML5 standard or term to suddenly vanish. Do expect a more fluid development approach, though.

At the WHATWG, there will be no more version numbers attached to new iterations of HTML, Hickson said. “The WHATWG HTML spec can now be considered a ‘living standard.’ It’s more mature than any version of the HTML specification to date, so it made no sense for us to keep referring to it as merely a draft,” Hickson said.

However, the W3C will continue with HTML5 standardization, reflecting the tensions that result from two different organizations overseeing HTML.

“W3C remains the standards body for HTML5,” spokesman Ian Jacobs said in a statement today. Jacobs said he is not aware of any changes Hickson’s decision will bring to that process.

Hickson believes, though, that browser makers are better off following a continuously updated “living standard” than snapshots taken of that standard at various points in its progression. He said in the blog post:

In practice, implementations [browsers] all followed the latest specs draft anyway, not the latest snapshots. The problem with following a snapshot is that you end up following something that is known to be wrong. That’s obviously not the way to get interoperability! This has in fact been a real problem at the W3C, where mistakes are found and fixed in the editors’ drafts of specifications, but implementors [browser makers] who aren’t fully engaged in the process go and implement obsolete snapshots instead, including those bugs, without realizing the problems, and resulting in differences between the browsers.

And he told CNET that he’d like for the W3C to adopt his viewpoint, though he seems to view it as very unlikely.

“Separate from the work at the WHATWG, and unrelated to this recent announcement, I’ve been trying to convince the W3C to switch to an unversioned model for a long time. It’s very much at odds with the entire way the W3C is structured,” Hickson said, with standards moving through a progression of drafts and votes through a process driven in part by patent matters.

“Still, I have hope that in time we can evolve the W3C,” Hickson added. “The WHATWG has already moved the W3C towards a more open model, with at least some working groups now operating almost entirely in public mailing lists, and some even allowing anyone to join. That alone would have been unfathomable a decade ago.”

WHATWG kept the HTML standard alive for several years when the W3C was pursuing an incompatible and ultimately unsuccessful sequel called XHTML 2.0. WHATWG has serious clout because it was founded by browser makers who have a major say in which new features arrive for use on the Web and which fall by the wayside.

WHATWG’s position has changed in recent years, though. For one thing, the W3C is actively trying to engage with developers and be a friendlier forum for experimenting with new ideas. For another, Microsoft–newly re-engaged in Web standards development and newly influential with its upcoming Internet Explorer 9 browser–has thrown its weight behind the W3C as the place to get things done.

And not everyone is happy with Hickson’s declaration. One Web developer responded to Hickson:

Maybe something more granular than full point revisions is advised, but a ‘living standard’ is a disaster…Say I want to make sure that 95 percent of my visitors or 70 percent or whatever can use my Web site as designed, without my spending hours coding up fallbacks and all that crap? How do you make a test suite and a browser compatibility chart for a “living standard”? It sounds like HTML is becoming a sort of Wikipedia revision style chaotic nightmare.

Another developer was also disgruntled–but resigned himself to the new development realities: “It feels as if this is just the acceptance of the reality that the pace our industry innovates and develops makes it impossible but to work in any other way apart from a ‘living’ standard.”

This article was first published as a blog post on CNET News.

Schmidt: ‘Adult supervision’ at Google no longer needed

Modern CEOs live on airplanes. But in stepping down from the CEO role to become executive chairman, Eric Schmidt’s travel schedule is about to go into overdrive.

Google’s bombshell announcement Thursday thrusts Schmidt, the company’s second CEO but the first to provide “adult supervision” to Larry Page and Sergey Brin’s world-changing creation, into a role where he won’t see the Googleplex very often. In ceding control of day-to-day operations to Page, Schmidt told financial analysts that he’s preparing to focus on “the things I’m most interested in.” In other words, meet Google’s new schmoozer in chief.

Schmidt will focus exclusively on spreading Google’s message around the world, talking to customers, partners, governments, and businesses thinking about spending money on Google’s products. To a certain extent, that’s what he’s been doing already, but being able to focus his considerable energies on external threats and opportunities might allow Google’s ruling triumvirate to adapt to a world that has found them nearly atop the tech world.

“For the last 10 years, we have all been equally involved in making decisions. This triumvirate approach has real benefits in terms of shared wisdom, and we will continue to discuss the big decisions among the three of us. But we have also agreed to clarify our individual roles so there’s clear responsibility and accountability at the top of the company,” Schmidt wrote in a blog post announcing the management moves.

While Google continues to operate perhaps the finest cash machine ever created on the Internet, one of its main problems over the last few years is that it has been late to realize that the world no longer sees it as a scrappy multicolored Silicon Valley start-up focused on Web search. This is perhaps most evident in Washington, where Google has come under heavy scrutiny in recent years and has faced trouble completing key projects, such as its Google Books settlement with authors and publishers and its proposed acquisition of ITA Software, both of which currently lie in limbo.

“An awful lot of the problems we’ve been having [in Washington] is that people don’t understand what we really do,” Schmidt said, admitting that Google let competitors and critics define the company in the absence of strong messages from Google. That’s about to change, and Schmidt has an excellent place to start as a member of the President’s Council of Advisers on Science and Technology and a prominent supporter of Democratic politicians for years.

Google has also faced resistance from industries towards which it has directed its considerable intelligence, computing horsepower, and resources. One area where Google can use an image makeover is in Hollywood.

Google is trying to acquire film and television content for Google TV and YouTube’s streaming service. So far, it’s been slow going: the major broadcast networks have all blocked their content from appearing on Google TV, and YouTube has yet to get much material from the big Hollywood studios.

In music, Google spent much of 2010 laboring on a cloud music service, multiple sources have told ZDNet Asia’s sister site, CNET. Google must now secure licensing rights from the top four major music labels and numerous publishers. That can be a painful process, but one thing the film studios and music labels understand and respect is star power, and Schmidt is a marquee name in business.

It’s also not hard to imagine Schmidt going on tour with Dave Girouard, president of Google Enterprise, wooing some of the Fortune 500’s biggest companies to move their enterprise IT software over to Google Apps. Schmidt has a pedigree in the enterprise technology world, serving as CTO at Sun Microsystems and CEO of Novell before taking the Google job, and can discuss the needs and wants of enterprise IT managers with the best of them. And he can also push: Schmidt’s efforts to sell Java to the world were considered essential to the spread of that technology.

One thing that will be interesting, however, is whether or not Schmidt can avoid a tendency to stick his foot in his mouth when it comes to discussing hot-button topics related to Google.

He’s been slammed many times in the past for suggesting people should watch what they do on the Internet, change their name to escape past deeds, and turn control of their lives over to computers. In many of those cases, Schmidt appeared to be joking, but in many others he didn’t. Any true schmoozer can’t leave a listener confused as to how to interpret his words.

As of Apr. 4, however, Schmidt will be Google’s public representative without having to worry about the nuts and bolts of Google’s payroll or whether or not to approve a new social-networking project that involves implanting chips in the brains of volunteers. This is not a role that either co-founder, as brilliant as they are, is capable of taking on: Brin often appears on Google’s behalf at product-oriented events, but a Page sighting is rare, and neither has much experience discussing Google’s broader issues in public.

Thursday’s announcement felt almost like a high-school graduation, with proud parent Schmidt sending Page out into the world on his own for the first time, confident in his ability to make his way. It would be premature to judge Page’s chances hours after the announcement; while it’s true he’s done this before, Google was a very different place in 2001 than it is in 2011.

Has Schmidt done enough to prepare the brilliant but almost painfully shy Page for his spot leading one of the world’s most important technology companies? One thing will surely help: Schmidt will be able to take much of the public pressure off Page as he circles the globe spreading the gospel of Google.

This article was first published as a blog post on CNET News.

Google allows Iran software downloads

Google has officially allowed Iranian users to download Google Earth, Picasa and Chrome after US trade restrictions were lifted.

People in Iran have been using Google Earth for at least the past two years, despite the export ban on the Google Earth client, according to Google’s Transparency Report, which tracks traffic to and from the company’s front-end servers.

Nevertheless, the search and advertising giant said in a blog post on Tuesday that it had unblocked downloads of the geographical imaging software client, as well as the Chrome Web browser and Picasa photo-sharing software.

“We’re committed to full compliance with U.S. export controls and sanctions programmes and, as a condition of our export licenses from the Treasury Department, we will continue to block IP addresses associated with the Iranian government,” Neil Martin, Google export compliance programs manager, said in the blog post.

A Google spokesman confirmed on Wednesday that the company’s Transparency Report showed that Iranians had been using Google Earth, which requires a software client to operate on a PC or other device. The Transparency Report tool shows a historical graph of traffic between Google servers and users in a chosen country, with the idea that a change in normal patterns of traffic would indicate a blockage.

The Transparency Report graph on Thursday did not show any rise in the flow of data with Iran, as might be expected after the lifting of restrictions.

Google’s spokesman said that “no conclusions” about the number of Iranian users could be drawn from the graph. “Google’s Transparency Report data is normalised and scaled in a way that obfuscates the raw traffic numbers,” said the spokesman. “What this means is that only relative data from a given time period is visible in the graphs. So no conclusions should be drawn about the amount of traffic currently displayed for Iran.”
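To illustrate what normalisation of that kind can look like–this is a hypothetical sketch, not Google’s published method–scaling each day’s request count against the period average preserves the shape of the traffic while hiding the raw volumes:

// Hypothetical sketch: scale daily request counts to relative units so
// only the shape of the traffic is visible, not the raw volumes.
function normalise(counts: number[]): number[] {
  const mean = counts.reduce((sum, c) => sum + c, 0) / counts.length;
  return counts.map(c => c / mean); // 1.0 = average traffic for the period
}

// Two very different raw volumes produce identical relative curves.
console.log(normalise([100, 200, 100]));       // [0.75, 1.5, 0.75]
console.log(normalise([10000, 20000, 10000])); // [0.75, 1.5, 0.75]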

The spokesman declined to say how many people were using Google Earth in Iran, and said he would not speculate as to how users had downloaded the Google Earth client before it was officially made available.

Web blocking
People can circumvent Web-blocking technologies in a number of ways, including peer-to-peer file-sharing. More elaborate methods include using proxy machines to mask the geographical location of users.

Google uses Web blocking by IP address around the world, to comply with local regulations, the company’s spokesman said.
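As a rough idea of how blocking by IP address can work–a minimal sketch only, since Google has not published its implementation, and the address range below is a reserved documentation range used purely for illustration–a server compares the client’s address against a block list before serving a download:

// Minimal sketch of IP-range blocking; the range and lookup are
// illustrative only, not Google's actual block list or implementation.
function ipToInt(ip: string): number {
  return ip.split(".").reduce((acc, octet) => acc * 256 + Number(octet), 0);
}

const BLOCKED_RANGES = [
  // e.g. a range associated with a blocked government network
  { start: ipToInt("203.0.113.0"), end: ipToInt("203.0.113.255") },
];

function isBlocked(clientIp: string): boolean {
  const addr = ipToInt(clientIp);
  return BLOCKED_RANGES.some(r => addr >= r.start && addr <= r.end);
}

// A download handler would then deny service to matching addresses:
console.log(isBlocked("203.0.113.42")); // true: refuse the download
console.log(isBlocked("198.51.100.7")); // false: serve it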

Iran itself has blocked Google software. YouTube has been blocked by Iranian authorities since Jun. 12, 2009, following demonstrations after a disputed presidential election. Iran may yet decide to block Google Earth, the spokesman said.

“We’re used to seeing global services blocked, unblocked and blocked again–there’s no real way of predicting,” said the spokesman. “Blocking of Web services will be up to the government of Iran.”

Protest movements have used Google Maps as a tool to organise demonstrations, and the spokesman said that Google Earth may be used in the same way in Iran.

“People build all kinds of layers onto Google Earth, and some have a political overlay,” said the spokesman. “People have very creative ways of using [online tools], which is part of the power of the Internet.”

Read more of “Google allows Iran software downloads” at ZDNet UK.

Verizon fires legal shot against Net neutrality rules

Verizon Communications has fired the first shot in the legal war to dismantle the Federal Communications Commission’s new Net neutrality rules.

The phone company Friday filed an appeal in the U.S. Court of Appeals for the District of Columbia Circuit challenging the FCC’s Report and Order on rules dealing with the issue of Net neutrality.

Michael E. Glover, Verizon‘s senior vice president and deputy counsel, said in a press release that the company has been committed to the process of preserving the open Internet but that after careful review of the FCC’s order, it believes that the FCC has overstepped its bounds.

“We are deeply concerned by the FCC’s assertion of broad authority for sweeping new regulation of broadband networks and the Internet itself,” he said in a statement. “We believe this assertion of authority goes well beyond any authority provided by Congress, and creates uncertainty for the communications industry, innovators, investors and consumers.”

After years of debate on the topic, the FCC adopted rules codifying specific Net neutrality principles in late December. The new regulation creates two classes of service subject to different rules: one that applies to fixed broadband networks and one for wireless networks.

The first rule requires both wireless and wireline providers to be transparent in how they manage and operate their networks. The second rule prohibits the blocking of traffic on the Internet; it applies to both fixed wireline broadband network operators and wireless providers, though the stipulations for each type of network are slightly different. The last rule applies only to fixed wireline broadband providers, prohibiting them from unreasonably discriminating against traffic on their networks.

Net neutrality opponents have been voicing their opposition to the rules since they were adopted a few weeks ago. And some Republican Congressional leaders have already pledged to dismantle the new Open Internet rules. Lawyers in D.C. have also been preparing complaints for weeks.

Larry Downes, a consultant, author, and contributor to CNET, said that he finds it odd that Verizon would file its complaint before the official regulations have been published in the Federal Register. But he said he isn’t surprised that Verizon has fired the first legal shot.

“There was little doubt from the Consumer Electronics Show [earlier this month] that this was going to happen,” he said in an e-mail. “By being first, Verizon gets the best possible court. The D.C. Circuit, in addition to being the court that decided the Comcast case, is historically skeptical of FCC efforts to stretch its authority.”

In April, the D.C. Circuit court tossed out the FCC’s August 2008 cease-and-desist order against Comcast, which had taken measures to slow BitTorrent transfers before voluntarily ending them earlier that year.

Since then, the FCC’s authority has been called into question. Some people believe that the agency has no authority to enact these rules. But the FCC asserted in its Open Internet ruling that it does have the authority to impose rules and regulations governing Net neutrality. This lawsuit is clearly the next challenge to that authority.

Verizon’s legal strategy
Downes said that Verizon is pursuing an interesting legal strategy by filing its complaint in the D.C. Circuit Federal Appeals court, which has special jurisdiction to hear certain FCC cases. Filing in that court is likely a safer bet for Verizon than a regular federal district court, which may not have much experience hearing telecommunications cases. What’s more, the D.C. Circuit Appeals court has historically been more favorable toward complaints against the FCC.

But to get it into the D.C. Circuit Federal Appeals court, Verizon has had to do some legal maneuvering. Instead of taking direct aim at the FCC’s new Net neutrality rules, Verizon asserts in its “appeal” that the FCC order changes the terms of existing licenses that Verizon holds to wireless spectrum. So in that regard the company is “appealing” the change to the order rather than initiating a new case that challenges the rules directly, Downes points out. If Verizon initiated a new case, it would have to be filed in a regular federal district court. But because it’s an appeal to existing FCC licenses, the D.C. Circuit Court of Appeals has jurisdiction.

The Court of Appeals may take some time to decide on the motion, which will require detailed briefing and possibly oral arguments, Downes said. In the meantime, Verizon could ask the court to stay implementation of the new rules, which will go into effect 60 days after the FCC posts the new rules in the Federal Register. That is expected to happen any day. Verizon could also ask the court to stay any other proceedings brought by others in different courts against the FCC.

Even if the D.C. appeals court eventually decides the case isn’t in its jurisdiction, it could grant these stays while it is deciding, which could delay action.

The FCC declined to comment on the court filing, but Downes said he suspects that the FCC will move to vacate the appeal on the grounds that the FCC order was a new rulemaking and not a modification to Verizon’s existing licenses. The agency will also likely argue that challenges to the FCC order should start in federal district court rather than in an appeals court.

Verizon also filed a separate motion today asking the D.C. appeals court to assign the same panel of judges who heard the Comcast vs. FCC case to the Verizon appeals case.

The Media Access Project, a nonprofit public interest law firm, says that Verizon is blatantly shopping for a favorable court to hear its “appeal” on the Net neutrality issue.

“Under this bizarre legal theory, virtually every FCC decision would wind up in one court,” said Andrew Jay Schwartzman, senior vice president and policy director of the Media Access Project. “Verizon has made a blatant attempt to locate its challenge in a favorable appeals court forum. The company’s theory assumes that all agency actions changing rules are ‘modifications’ to hundreds of thousands of licenses. This would insure the case remains in the District of Columbia Circuit, and keeps others from seeking review in different courts.”

So far, Verizon’s efforts appear to be about starting a long and time-consuming legal process, which will keep Net neutrality uncertain for some time. For now, actual details of the company’s arguments against Net neutrality are thin.

“There’s not much substance yet to their appeal,” Downes said in an e-mail. “Nor does there need to be. They claim that the Net neutrality order exceeded the FCC’s authority, was arbitrary and capricious, violates Verizon’s constitutional rights, and is otherwise illegal. For now, that’s all they have to say. The real legal arguments will come when they brief the case.”

This article was first published as a blog post on CNET News.

CEO shake-up at Google: Page replaces Schmidt

Google shook up its ruling triumvirate Friday, announcing that CEO Eric Schmidt would be taking the role of executive chairman, while co-founder Larry Page will become CEO. Sergey Brin, who has also shared power with the two others, will work on “strategic projects”, Google said.

Schmidt, who was hired by the co-founders to be Google’s CEO in 2001, will focus on external partnerships and business deals starting on Apr. 4, when Page will take over the day-to-day management role. Schmidt said in a blog post that Page, “in my clear opinion, is ready to lead”.

On a conference call originally scheduled to discuss Google’s fourth-quarter results, Schmidt said “I’m going to get a chance to work on the things I’m most interested in,” which will include talking to customers, partners, and the government regulators breathing down his company’s neck.

Page, 37, will actually become Google’s third CEO, though he held the role during the first few years of the company’s existence. He’ll be tasked with making sure Google toes the line internally and said several times during the call that he’s excited to lead Google at a time when computing is still a relatively new way of life for many people.

Brin will continue to focus on technology products, assuming the title of co-founder, as opposed to his current role of president of technology. “He’s an innovator and entrepreneur to the core, and this role suits him perfectly,” Schmidt wrote in his post.

Brin is currently working on several new products that he didn’t want to discuss, citing criticism that Google has been prone to launching “vaporware” in the past: Google Wave comes most prominently to mind. Schmidt deferred a question about Google’s social strategy to Brin, suggesting that social technologies make up one big area of his focus.

The shake-up comes at a time when Google’s search dominance is unquestioned, and its efforts to expand its business into display advertising and mobile technologies have given it a few more sources of funding for its dreams. However, the company has struggled to confront a new way of obtaining information on the Web–that curated by your friends in social networks–and also must deal with the wandering eyes of several Googlers wondering where the next big stock market payout can be found in Silicon Valley.

Departing employees have also complained that as Google has grown–now with 24,400 employees–it has gotten harder and harder for good ideas to make it up the corporate ladder. Schmidt alluded to that in his statement, suggesting that Google is hoping to become a bit more nimble.

“As Google has grown, managing the business has become more complicated. So Larry, Sergey, and I have been talking for a long time about how best to simplify our management structure and speed up decision making–and over the holidays, we decided now was the right moment to make some changes to the way we are structured,” Schmidt wrote in his post.

In announcing fourth-quarter earnings results alongside the management news, Google said revenue minus traffic acquisition costs amounted to US$6.37 billion, ahead of analyst estimates. Net income for the quarter was US$2.54 billion, or US$2.85 billion excluding onetime charges. Analysts were expecting earnings per share, excluding charges, of US$8.09, and they got US$8.75 from Google.
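Working backward from those figures (a back-of-envelope sketch; the share count is implied by the reported numbers, not taken from Google’s filing):

// Back-of-envelope check: the share count is implied by the reported
// figures, not taken from Google's filing.
const netIncomeExCharges = 2.85e9; // US$, excluding onetime charges
const reportedEps = 8.75;          // US$ per share, excluding charges

const impliedShares = netIncomeExCharges / reportedEps;
console.log(Math.round(impliedShares)); // ~325.7 million diluted shares

// The US$8.09 consensus would have implied roughly US$2.64 billion:
console.log((8.09 * impliedShares / 1e9).toFixed(2)); // "2.64"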

Investors seemed pleased with the numbers, and they didn’t seem freaked out by the management shake-up: Google’s stock rose US$14.63, or 2.33 percent, in after-hours trading after closing down for the day.

This article was first published as a blog post on CNET News.

Holiday success makes eBay earnings sparkle

High activity during last month’s holiday season, as well as continued strong growth from PayPal, meant that e-commerce giant eBay had some rather nice numbers to report in its 2010 fourth-quarter earnings.

Revenue was up 10 percent year-over-year if you don’t count Skype, which eBay spun off late in 2009, and profits were up 24 percent. Analysts were expecting a profit of 47 cents a share; eBay posted 52 cents.

The company reported “strong holiday shopping momentum” as the year drew to a close, propelled in part by its focus on mobile commerce and areas of the online shopping world that it had largely not yet tapped, like fashion.

“We delivered a strong fourth quarter and a solid year, driven by our customer focus, commitment to technology-led innovation and our operating discipline, which is enabling us to reinvest in growth,” CEO John Donahoe said in a release. “We are driving strong global growth at PayPal and strengthening our core eBay business. And we are innovating quickly in areas such as mobile, which is helping to position us at the forefront of trends shaping the future of shopping and payments.”

Transaction system PayPal continues to grow, eBay reported. It’s adding 1 million new active accounts per month, and nearly half its fourth-quarter revenue was generated outside the U.S. PayPal is also an option in coffee conglomerate Starbucks’ new mobile payment app, potentially exposing it to new customers in the process.

But eBay also has, depending on how you see them, areas that are either weak spots or potential sources of growth. Though it offers deals and discounts to members via e-mail and on its Web site, eBay has not made nearly as much of a move into the current daily-deals craze (fueled by the lightning-fast rise of Groupon) as some other commerce companies. Amazon, for example, acquired fire-sale site Woot last year and made a big investment in Groupon rival LivingSocial, the fruits of which were all over Web chatter today when a LivingSocial deal for Amazon gift cards earned it plenty of positive buzz.

This article was first published as a blog post on CNET News.

Facebook launches new low-tech mobile site

Much of Facebook’s projected growth over the next few years is in regions of the world where an iPhone or Android device is a novelty rather than a staple. Consequently, the company has been making some strategic moves: On Wednesday, Facebook announced a new mobile site optimized for lower-end cell phones and a plan to make it available in many countries without data fees.

“The app provides a better Facebook experience for our most popular features, including an easier-to-navigate home screen, contact synchronization, and fast scrolling of photos and friend updates,” explained a blog post by Mark Heynen, a program manager at Facebook.

Developed in partnership with Snaptu, a mobile development company, the new Facebook site works on more than 2,500 cell phones from the likes of Nokia, Sony Ericsson, and LG. The company has also inked agreements with carriers in countries as varied as Brazil, Canada, Tunisia, Romania, and Hong Kong (currently, none are in the U.S.) to make access to the site free of data charges for 90 days to start. More agreements are on the way, including potentially extended no-fee plans.

This appears to put a more official face on a project that Facebook has been working on for some time, launching ephemeral experiments like Facebook Lite for easier access to the social network from slower connections and lower-tech browsers. It has also kept around a more basic mobile site, 0.facebook.com.

Facebook’s mobile initiatives made headlines last year when the company was rumored to be developing a mobile phone or operating system of its own, something that it has obliquely denied.

This article was first published as a blog post on CNET News.

W3C’s new logo promotes HTML 5

Underscoring the confluence of technology, politics, and marketing, the World Wide Web Consortium (W3C) on Tuesday unveiled a new logo for HTML 5.

With the logo, the W3C wants to promote the new Web technology–and itself. The Web is growing far beyond its roots of housing static Web sites and is transforming into a vehicle for entertainment and a foundation for online applications.

The W3C hopes the logo–T-shirts and stickers with it already are on sale–will fuel excitement and interest in the refurbished Web. “In addition to work on the specification, test suites, and useful materials for developers, we seek to raise awareness about W3C technology and to promote adoption of W3C standards,” spokesman Ian Jacobs said.

Curiously, though, the standards group–the very people one might expect to have the narrowest interpretation of what exactly HTML 5 means–instead say it stands for a swath of new Web technologies extending well beyond the next version of Hypertext Markup Language.

And some Web developers aren’t happy about that. Web developer Jeremy Keith wrote today that the W3C just helped push HTML5 “into the linguistic sewer of buzzwordland”.

Here’s how the W3C put it: “The logo is a general-purpose visual identity for a broad set of open Web technologies, including HTML 5, CSS, SVG, WOFF, and others,” the W3C said in the FAQ about the HTML 5 logo, referring to Cascading Style Sheets (CSS) for formatting and graphical effects, Scalable Vector Graphics (SVG) for advanced 2D graphics, and the Web Open Font Format (WOFF) for elaborate typography. “In addition to the HTML 5 logo there are icons for eight high-level technology classes enabled by the HTML 5 family of technologies. The icons can be used to highlight more specific abilities, such as offline, graphics, or connectivity.”

Using “HTML 5” to represent technologies well beyond the standard itself doesn’t sit well with some developers who see a useful role for more precise terms. Bruce Lawson, an employee of browser maker Opera and co-author of a book on HTML 5, has proposed the acronym NEWT–new exciting Web technologies.

“Basically: #HTML 5 logo = good thing. But disappointed to see CSS 3 conflated into it,” Lawson tweeted today, pointing to his rather amusingly theatrical YouTube video about it.

His case was likely something of a lost cause, though, even before the W3C itself offered a logo naming a specific standard to stand instead for a range of technologies. Apple, a company with vastly more marketing skill than most, launched an HTML 5 showcase last year that extended well beyond HTML 5–indeed it was probably better classified as a demonstration of new CSS than new HTML. There’s a reason that marketing types preferred the broad definition of HTML 5: it’s hard to get people to understand a long series of acronyms from standards groups. And it seems unlikely Apple’s promotional experts would get excited about an amphibian.

To be fair to marketing department oversimplifiers, it’s hard to keep track of just what the W3C is up to. Web Workers, Geolocation, IndexedDB, Web Sockets–all these are standards that are useful for the next-generation Web but that venture beyond HTML 5, strictly defined.
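For developers the distinction is concrete: each of those capabilities is a separate specification that has to be probed for on its own, whatever the umbrella branding says. A minimal feature-detection sketch in browser-side TypeScript, using standard DOM and window properties:

// Sketch: probing a few "HTML 5 family" capabilities individually.
// Each is its own specification, whatever the marketing umbrella says.
const support = {
  video: typeof document.createElement("video").canPlayType === "function",
  webWorkers: typeof (window as any).Worker !== "undefined",
  geolocation: "geolocation" in navigator,
  webSockets: typeof (window as any).WebSocket !== "undefined",
  indexedDb: typeof (window as any).indexedDB !== "undefined",
};
console.log(support); // e.g. { video: true, webWorkers: true, ... }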

But Web-development insiders reacted to the logo’s broad definition with scorn, or at least raised eyebrows. Keith’s blog post is titled “Badge of Shame”:

What. A. Crock. What we have here is a deliberate attempt to further blur the lines between separate technologies that have already become intertwingled in media reports…

So now what do I do when I want to give a description of a workshop, or a talk, or a book that’s actually about HTML 5? If I just say “It’s about HTML 5”, that will soon be as meaningful as saying “It’s about Web 2.0”, or “It’s about leveraging the synergies of disruptive transmedia paradigms”. The term HTML 5 has, with the support of the W3C, been pushed into the linguistic sewer of buzzwordland.

And there was more carping:

• “Hmm, wow. I’m thinking a new logo representing ‘the Web platform in a very general sense’ is maybe not really what HTML 5 needed the most,” tweeted John Lilly, Greylock venture partner and former Mozilla chief executive.

• “CSS3 is now ‘officially’ part of HTML 5,” said a sarcastic tweet from Anne van Kesteren, who works on standards at Opera.

• Longtime Web developer Jeffrey Zeldman called the logo’s broad definition “misguided”.

• “Nothing wrong with the #HTML5Logo itself, use it if you want, but including #CSS3 and other bits is just wrong and confusing,” tweeted Web developer and HTML 5 fan Ian Devlin.

• And HTML 5 book co-author Remy Sharp asked, “Let’s clear this up, once and for all: does the @w3c intend for ‘CSS3’ to be included as ‘HTML 5’?”

Don’t expect standardization work at the W3C to lose its ultra-precise wording in favor of loosey-goosey marketing terminology. But do expect the W3C to promote its broader agenda in more general terms.

Jacobs said in a blog post that the W3C had begun an internal project in 2010 to create a logo for the “open Web platform”–another, more general term for today’s constellation of new Web technologies–but put it on hold. Today’s HTML 5 logo came instead from design firm Ocupop; according to creative director Michael Nieling, it was developed with all the Web technologies in mind:

The term HTML 5 has taken on a life of its own; there has been significant confusion and debate both within the developer community and in the public at large as to what exactly HTML 5 is when the term is used outside of simply referring to the spec itself. This variability in perception is what inspired the project–a group of developers and HTML 5 evangelists came to us and posed the question, “How can we better communicate all of the technologies and potential that HTML 5 represents?” …and the resounding answer was, the standard needs a standard. That is, HTML 5 needs a consistent, standardized visual vocabulary to serve as a framework for conversations, presentations, and explanations moving forward…

Nieling himself said, though, that the designers don’t get the last word about what exactly the logo means.

“I am confident that we’ve provided a very clear and effective baseline of vocabulary for HTML 5,” he said. “The syntax and ultimate meaning is up to the community.”

This article was first published as a blog post on CNET News.

Facebook rethinks data-share option

Facebook has temporarily disabled a developer option that could have resulted in the disclosure of personal information.

The social networking site said in a blog post on Tuesday that it was rethinking an option that would have allowed developers of Facebook apps to gather contact information, including home addresses and mobile phone numbers.

“Over the weekend, we got some useful feedback that we could make people more clearly aware of when they are granting access to this data,” Facebook said in a blog post. “We agree, and we are making changes to help ensure you only share this information when you intend to do so. We’ll be working to launch these updates as soon as possible, and will be temporarily disabling this feature until those changes are ready.”

Read more of “Facebook rethinks data-share option” at ZDNet UK.

Network design, dollars impact public Wi-Fi access

When users encounter network congestion while surfing public Wi-Fi hotspots, the problem can be attributed to the cost of network installation and provision as well as the size of the device accessing the Web, according to analysts.

Bryan Wang, Asia-Pacific associate vice president of connectivity research at Springboard Research, told ZDNet Asia that the challenge of installing public Wi-Fi is less about a technical issue than it is about efficiency in terms of utilization and cost.

While more hotspots can be added to minimize Wi-Fi congestion when network traffic gets heavy, this will result in low utilization during off-peak traffic periods, Wang said in an e-mail interview.

Although the extra hotspots can be switched off, from a business standpoint, organizations providing the public access are inevitably still concerned about bandwidth and maintenance costs incurred to keep networks up and running, he said.

J. Ramesh Babu, director of managed services at Cisco Systems Singapore, added that another reason for Wi-Fi gridlock may lie in the continued prevalence of legacy or ageing equipment.

Using old client technology will slow down network performance, especially with the amount of rich media and data being transmitted to mobile devices, Babu said in an e-mail.

He added that challenges associated with the deployment of public or outdoor Wi-Fi networks are surmountable with the right network configurations and tools, as well as proper planning. For example, he suggested that venue proprietors install a solution that supports a wide variety of devices, such as Wi-Fi Internet Protocol phones, laptops, and dual-mode mobile phones able to run on both cellular and Wi-Fi networks.

He noted that the range of wireless access point coverage can be affected by structures such as walls, cubicles and metal elevator shafts. There can also be interference due to cordless telephones, Bluetooth and other wireless devices, he added.

The Wi-Fi service provider can resolve this via tools that detect and automatically mitigate RF (radio frequency) interference, by configuring the wireless network to work around the interference source.

Consumers expect Wi-Fi access
Babu noted that access to “pervasive Wi-Fi is a logical expectation” of consumers today, since more enterprises are now mobilizing employees, business partners, customers, and even corporate assets.

Ovum’s senior consultant, Craig Skinner, concurred. He noted that the growing use of mobile devices such as tablets and smartphones, alongside laptops and netbooks, is fueling demand for Wi-Fi bandwidth.

The Ovum analyst added that public Wi-Fi networks, if designed to do so, can handle high-density traffic from a mix of mobile devices including slates and smartphones, as well as larger devices such as laptops. Reiterating Wang’s views about resource requirements, Skinner said the question, however, goes back to how the costs of building such a network are covered.

In an e-mail interview, he explained that it is not “realistic” to expect the same level of network performance at an open access location, where the network is funded by public money, as that provided at a tech conference venue.

He noted that public Wi-Fi access is often provided free as an incentive to attract consumers to a venue or persuade companies to hold events in one venue over another. But ultimately, the infrastructure and operating costs of providing free public access must still be paid for, Skinner said.

Because users are not directly paying for the service, there is less incentive for the venue operator to spend on providing a high quality of network service, particularly for occasions when heavy usage is expected, he explained.

In addition, the analyst noted that the size of the device accessing the Wi-Fi network also plays a part in the quality of access. He explained that the size of the antenna and the power level it transmits can limit both the range of network detection and the level of interference the device can continue to operate in.

Hence, when smaller devices like handsets seem to “struggle” to connect in a high-congestion situation, it is a result of design tradeoffs in terms of size, and not because the network has cut them off, Skinner continued.

Quizzed if consumers’ expectations of Wi-Fi service have heightened–given the rise of mobile devices, mobile workforce and even Wi-Fi access onboard airplanes–Springboard’s Wang said Wi-Fi is not a mobility solution.

“Wi-Fi cannot support quick handover from one hotspot to another,” he said. “Because of its limited coverage [compared] with 3G or WiMax, if we use Wi-Fi as a mobility solution when we are moving around, it will need a lot of network handovers, which will take up a lot of network resources. Therefore, Wi-Fi is not a practical technology to be used as a mobility solution.”

Google Apps makes a new promise: No downtime

Anyone buying into a Web-based service knows about the SLA–the service level agreement. That’s where the Web company makes a promise about uptime, the amount of time that the service will be up and running without any service disruption.

In most cases, there’s a clause in the agreement that allows for scheduled downtime for maintenance. Now, Google–in an effort to further set itself apart from competitors–is removing that downtime clause from its customers’ SLAs.

From here on out, any downtime will be counted and applied toward the customer’s SLA. In addition, the company is amending the SLA so that any intermittent downtime is counted, as well, eliminating the previous provision that any downtime less than 10 minutes was not counted. In a blog post, Google Enterprise product management director Matthew Glotzbach wrote:

People expect email to be as reliable as their phone’s dial tone, and our goal is to deliver that kind of always-on availability with our applications… In 2010, Gmail was available 99.984 percent of the time, for both business and consumer users. 99.984 percent translates to seven minutes of downtime per month over the last year. That seven-minute average represents the accumulation of small delays of a few seconds, and most people experienced no issues at all.
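The arithmetic behind that claim checks out (a quick sketch, assuming a 30-day month):

// Quick check of the uptime claim, assuming a 30-day month.
const uptime = 0.99984;               // 99.984 percent availability
const minutesPerMonth = 30 * 24 * 60; // 43,200 minutes
const downtime = (1 - uptime) * minutesPerMonth;
console.log(downtime.toFixed(1));     // "6.9"–about seven minutes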

Read more of “Google Apps makes a new promise: No downtime” at ZDNet.

Google answers critics on HTML5 Web video move

Google responded to critics of its decision to drop support for a popular HTML5 video codec by declaring that a royalty-supported standard for Web video would hold the Web hostage.

Much was made last week of Google’s decision to end support for the widely used H.264 video codec as it implements a key portion of the collection of technologies known as HTML5 in its Chrome browser. Mike Jazayeri, a product manager for Google, wrote a blog post last week responding to some of the more common critiques of its plan to support only the WebM video codec standard within the <video> tag.

“Our choice was to make a decision today and invest in open technology to move the platform forward, or to accept the status quo of a fragmented platform where the pace of innovation may be clouded by the interests of those collecting royalties,” Jazayeri wrote. “Seen in this light, we are choosing to bet on the open web and are confident this decision will spur innovation that benefits users and the industry.”

Google’s decision to support only WebM splits the browser community roughly in two. Apple and Microsoft support the H.264 codec as the technology to be used in the <video> tag, while Mozilla, Opera, and now Google have gotten in line behind WebM, which Google turned into an open-source project after acquiring the VP8 technology at the heart of WebM from On2 Technologies last year.

The main issue is that the five organizations involved in the HTML5 standards-setting process were simply not going to agree on a standard codec for the <video> tag, Jazayeri wrote. Apple and Microsoft are members of the patent pool, known as MPEG LA, that licenses the H.264 codec. And Mozilla and Opera are smaller organizations opposed to paying the licensing fees for that technology.

“To companies like Google, the license fees may not be material, but to the next great video startup and those in emerging markets these fees stifle innovation,” Jazayeri wrote in the post. “We believe the web will suffer if there isn’t a truly open, rapidly evolving, community developed alternative and have made significant investments to ensure there is one.”

Google’s decision has caused consternation among video producers worried about having to support two different video standards, since they have no choice but to support devices that play H.264 video–nearly all modern devices–for years to come. Hardware decoders for the H.264 codec, which are all but essential for mobile devices with constrained battery life, are widespread while hardware decoders for WebM are just now emerging.
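Supporting both standards typically means publishing two encodings of each clip and letting the browser choose the first source it can decode. A minimal sketch in browser-side TypeScript (the file names are placeholders):

// Sketch: list the same clip in both WebM and H.264 so any HTML5-capable
// browser can pick a format it decodes. File names are placeholders.
const video = document.createElement("video");
video.controls = true;
const sources: Array<[string, string]> = [
  ["clip.webm", 'video/webm; codecs="vp8, vorbis"'],
  ["clip.mp4", 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"'],
];
for (const [src, type] of sources) {
  const source = document.createElement("source");
  source.src = src;
  source.type = type;
  video.appendChild(source);
}
document.body.appendChild(video);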

Critics have also pointed out that the decision might actually cause video sites to rely on plug-ins to display video when the whole point of the <video> tag was to give Web publishers a way to move beyond the limiting nature of plug-ins.

Google, with a huge repository of video in YouTube, understands the concerns about maintaining two different video standards, Jazayeri wrote. However, they were probably going to have to do so anyway if they wanted to serve video to Firefox users, who constitute roughly 22 percent of the market, he wrote. (Opera’s market share is around 2 percent.)

Jazayeri did not directly address the issue of Google’s support for WebM ensuring Flash would live for years, other than to say that Chrome would continue to support that plug-in.

The post is likely to do nothing to mollify those who think Google is making a huge mistake, but it does lay out the company’s thinking in a much more detailed way than its original post provided.

“Bottom line, we are at an impasse in the evolution of HTML video,” Jazayeri wrote. “This is why we’re joining others in the community to invest in WebM and encouraging every browser vendor to adopt it for the emerging HTML video platform (the WebM Project team will soon release plugins that enable WebM support in Safari and IE9).”

It’s fair to say this debate is far from over.

This article was first published as a blog post on CNET News.

Charity 2.0 puts users as force behind social change

The nonprofit world is not leveraging social media just because of its speed, reach and near-zero cost of spreading awareness and raising funds. The biggest potential to effect social change comes from putting popular Web 2.0 tools and technologies in the hands of users, who are then empowered to dedicate themselves to social causes, charity 2.0 evangelists say.

In an e-mail interview with ZDNet Asia, Paull Young, director of digital engagement at Charity Water, said that social media is “all about people more so than technology”. The New York-based nonprofit organization raises money to build wells to provide clean drinking water in developing nations.

Therefore, charities need to shift from being in “command-and-control”, to using social media tools to enable their supporters to take up their message and mission and spread it to the world, he explained.

Stressing that the donor experience is paramount, Young said a humanitarian organization must be able to come up with ways to expand the experience its donors have with Web 2.0 tools. For instance, Charity Water links fundraisers and donors to the specific water project they contributed to, with photos and GPS coordinates in Google Maps.

Social media: strategy, not solution
Social media is a major channel that Charity Water uses to communicate with its audience, said Young. It enables “both our brand and our supporters to spread our message, share content to educate about the water issue, and importantly, fundraise”, he pointed out. Charity Water’s Twitter account has a significant following, with more than a million followers.

Social media is also a critical component of Charity Water’s digital strategy, Young added. According to him, Charity Water raised over US$9 million in 2010, of which 70 percent came via digital channels. He also added that its online fundraising platform, where people create their own Web pages to campaign for donations, generated nearly US$6 million in 15 months, with over 6,000 fundraisers receiving 72,000 individual donations.

It is important to remember that “at the end of every Twitter account and Facebook profile is a real person, not a wallet”, Young emphasized. Social media merely provides a platform; what charities must focus on is how to produce inspiring content to communicate, engage and build trust with the user audience, he said. This authentic connection, he noted, can then lead to long-term donors and life-long supporters.

Siegrid Saldana, community manager at Give.sg, a Web portal that links individuals and communities with Singapore-based charities and helps facilitate online fundraising and donations, concurred. In an e-mail, she said social media is a “great and amazing tool” for nonprofit initiatives because it is quick, easy and cost-effective to spread the word and expand their reach.

But at the end of the day, the effort an individual or an organization puts in to further their cause is not just about social media, she noted.

“Social media in virtual philanthropy is about empowering everyday people, making it easier for them to help their favorite nonprofits to magnify the social impact and make a difference,” Saldana pointed out.

To exemplify the power of social media, Saldana referred to the Charity Bike ‘N’ Blade 2010 event. Its organizers set up a fundraising page on Give.sg, and using social networks to publicize the event, raised S$380,000 (US$293,726) in total–nearly triple the target.

Playing for funds
The mobilizing and fundraising power of users is also evident in social gaming.

When Farmville creator Zynga held two campaigns of Sweet Seeds for Haiti–one for the victims of the Haitian earthquake in January 2010 and one for a school building project–the company raised more than US$1.5 million within five and 17 days respectively. Farmville players use in-game currency to buy the sweet seeds–limited-edition sweet potato crops that can boost their game score–with 100 percent of the proceeds going to providing food for Haitian schoolchildren.

A Zynga spokesperson said in an e-mail that social games have the power to make a significant social impact. In-game campaigns featuring social goods, such as Sweet Seeds for Haiti, rally players together to contribute to a good cause and, at the same time, enjoy game play. Zynga’s base of around 215 million users makes the social game “the largest possible platform to effect positive and massive change by users”, she added.

Asia to see greater Net neutrality discussion

Once a topic debated more vigorously outside the region, Net neutrality is seeing more interest in Asia due to unique government initiatives and rapid Web infrastructure development. In contrast, the topic has lost steam in the United States.

In an e-mail interview, Craig Skinner, senior consultant at Ovum, told ZDNet Asia that a series of factors are coming together to heighten the importance of discussions around Net neutrality. These include next-generation networks (NGN) and convergence, bandwidth-intensive Web 2.0 applications such as video, and the erosion of traditional voice revenues for telcos, which forces market players to seek alternative revenue streams that put them in direct competition with network-independent Web services providers.

Skinner said these factors have all led to a divergence–and a dilemma–between costs and revenues for the network provider with regard to managing different types of Web traffic, for instance, video versus peer-to-peer (P2P) file sharing.

Proponents of Net neutrality support the principle that all online services and content should be treated equally, so Internet service providers (ISPs) cannot discriminate against certain services or content by prioritizing or impeding access to any particular site or application through blocking or slowing bandwidth. For example, carriers cannot deliberately degrade the traffic performance of competing Web sites or charge other site operators a premium in exchange for preferential treatment, whilst giving priority to traffic running to and from their own sites.

Cost versus revenue
Following years of discussions, the US Federal Communications Commission (FCC) last December finally made Net neutrality regulations official.

However, Skinner stressed that regulators must not underestimate the pressure network providers face in balancing cost and revenue.

The Melbourne-based analyst noted that, in Asia, there has been greater reliance on competition to mitigate the impacts of ISP discriminatory behavior. He pointed to countries such as Australia and Singapore as “interesting” case studies, where he said government initiatives on broadband networks have included open access as a fundamental requirement, enabling network neutrality upfront.

Singapore’s ICT regulator, the Infocomm Development Authority (IDA), released its Net Neutrality Consultation Paper on Nov. 11 last year to seek views and comments from the industry and public regarding regulatory policies on the issue.

An IDA spokesperson said in an e-mail interview that the regulator’s stance on Net neutrality is important to provide guidance to the industry on what is acceptable and what is not. For example, the blocking of legitimate content such as voice-over-Internet Protocol (VoIP) services is prohibited, she said.

Beyond regulatory requirements, however, she added that IDA is prepared to allow ISPs to differentiate their service offerings. For instance, an ISP can enter into an agreement with a provider of time-sensitive telemedicine services in order to provide high-performance Internet access to consumers.

According to the spokesperson, IDA’s approach to Net neutrality is one that is “pragmatic and balanced”. The proposed framework is built on three prongs: promoting competition, increasing transparency, and ensuring a reasonable quality of Internet access, she explained.

Growing debate in Asia
Ben Cavender, associate principal from China Market Research Group (CMR), said he expects to see more discussion on Net neutrality in Asia. He told ZDNet Asia in an e-mail that several markets in the region have been rapidly developing their network infrastructure, ISPs and content hosts.

Furthermore, Cavender observed that Web traffic and services that need high bandwidth in Asia are on the rise, alongside a growing population of consumers who are now more willing to spend to access online information.

He noted that China’s recent crackdown on what it deemed to be unapproved, or not state-owned, VoIP services is not directly related to Net neutrality but signals the Chinese government’s stance and authority to step in and enforce regulations concerning Internet traffic management.

While the discussion on Net neutrality is set to intensify across the Asia-Pacific region, the debate has lost steam in the United States, according to Julian Wright, associate professor of the Department of Economics at the National University of Singapore.

In an e-mail interview, he said that Net neutrality had escalated into a political issue in the U.S., where battle lines have been drawn between the Republican and Democratic camps. And the debate is likely to get messier in the near future, Wright said.

Some have questioned whether the FCC has the authority to enforce its rules, he noted, adding that further complications could emerge when the reality of executing the concepts of Net neutrality surfaces with the possibility of unintended consequences.

Last week, open Web advocates criticized BT for offering a new service that charges content companies a premium for higher-quality distribution of videos, saying it would create a two-tier Internet that goes against the ‘all bits are equal’ rhetoric of Net neutrality.

Should MySpace just die already?

It goes without saying that News Corp. is familiar with the process of cancellation–it’s the parent company of the network that controversially axed Firefly and Arrested Development, after all.

But when it comes to getting rid of a social networking site, things seem to be a little bit more complicated: After this week laying off nearly half the people who work at MySpace, News Corp. seems to be in limbo over what to do about the once-cool social network. Rumor has it that the Rupert Murdoch-helmed media company is shopping it to potential buyers, and at least one analyst is speculating that if a new owner isn’t found by the summer, MySpace will just be shut down entirely.

Shutting MySpace down altogether seems pretty unlikely, considering that the site does technically still have millions of users, and an extensive redesign geared toward transforming it into a pop-culture media-sharing site may have generated a mild uptick in interest. It also would be more expensive to close up shop than to sell for even a bargain-basement price. But consider this: in the grand scheme of things, MySpace is probably better off dying a quick death now than prolonging its painful slide into irrelevance any longer.

Big media’s digital strategy has long since shifted from owning and operating a social network to finding the best way to get its message and brand through the channels of communication offered by the social-media sites that have already proven successful–Facebook, Twitter, YouTube, and to a lesser extent Foursquare. So News Corp., it seems, is eager to get MySpace off its hands: “Unofficial rumors are another round of layoffs happen in March,” someone claiming to be a MySpace employee posted on Reddit. “My expectation is that MySpace is being prepped to sell within the next few months and that process will include more restructuring.”

It’s going to be difficult either way, and there is nothing pleasant about the idea of a few hundred more people losing their jobs or the feeling of demoralization that would be sure to hit the people who spent years building MySpace. But one of the smartest lessons from recent trends in Web development has been that if something online doesn’t work, there’s not as much shame in shuttering it as you’d think. Facebook has built, released, and then cut numerous features. Microsoft, in its transition from MSN to Windows Live, has cast off a few products and offered residual users some options for exporting their content.

There’s never been a parallel example of a community site closing on the scale of MySpace. News Corp. could pioneer this by finding a clean and user-friendly way to wind things down, giving members the option to export content and assisting with job transitions for employees. If executed well, it could be held up as a landmark example of when and how to close a community site gracefully. Because arguably, as long as the brand name is there and the legacy is there, MySpace is going to be held back by the basic fact that, well, it’s MySpace. Its new incarnation as a pop-culture hub is even more focused on being an arbiter of cool than its old social-networking model was. The tarnished name MySpace, even with a newly designed logo, is getting in the way. An edgier re-branding strategy could’ve involved changing the name entirely and producing a wholly new site, but News Corp. missed that boat.

Besides, though it may be cheaper and seemingly more humane, selling MySpace and keeping it alive under the auspices of a new parent company would be unlikely to generate the turnaround that MySpace has sought after multiple rounds of layoffs and multiple shifts in design and focus. A sale would likely look something like AOL’s sale of Bebo, a social network that had proven to be an even more ignominious purchase than MySpace was to News Corp. Never much of a powerhouse in the U.S., Bebo sold to AOL in early 2008 for US$850 million–a price tag that seemed ridiculous even then, and a much higher figure than the US$580 million that News Corp. spent on MySpace parent company Intermix in 2005. Last year, with Facebook the clear winner in social-networking, AOL sold Bebo for pennies to a private-equity firm. Its fate is unclear.

There’s a difference, though. At the time, AOL was struggling all-around and arguably didn’t offer much to Bebo. In contrast, the kind of resources that News Corp. offers MySpace should not be underestimated. The moment the site is sold, it will weaken or even sever a lifeline to a powerful media company that has, without a doubt, offered MySpace many opportunities for partnerships, advertisers, content, and other perks. Removing MySpace from a respected media company won’t turn it into a start-up–it’ll just hasten its decline and potentially make things miserable for the employees who remain.

But the ability to efficiently wind it down, tie up loose ends, and distribute users to appropriate greener pastures–perhaps bringing some new pop-culture and content start-ups into the spotlight in the process–could prove to be MySpace’s greatest innovation in the end.

It’ll probably have less backlash than canceling Arrested Development, to boot.

This article was first published as a blog post on CNET News.

2010 PC growth sees slowdown, tablet cannibalization

Following its modest growth last quarter, the PC market saw its strongest quarter of the year in shipment volume, while still managing to miss the expectations of research firms IDC and Gartner.

According to the Quarterly PC Tracker Survey released Thursday by IDC, overall worldwide PC shipments grew 2.7 percent year-on-year during the fourth quarter, with Gartner reporting a slightly larger 3.1 percent as part of its quarterly report. Both numbers missed the firms’ expectations, which IDC had predicted at 5.5 percent and Gartner at 4.8 percent.

IDC said that one of the big reasons for the “modest” gains centered on PCs getting competition from tablets like Apple’s iPad, as well as people being happy with the computer hardware they already own. That trend is expected to continue into the new year, the report said. Gartner had similar sentiments, pointing to the iPad, along with other consumer electronics like game consoles, cutting into the PC’s turf.

There were, however, some standout numbers and market-share changes among the top hardware vendors. IDC had Dell making a comeback, bouncing back to the No. 2 spot in total PC shipments during the fourth quarter and ousting Acer, whose drop IDC attributed to poor sales of mini notebook PCs. Gartner, on the other hand, kept Dell at No. 3, putting it about 1 percent below Acer in fourth-quarter market share, while praising its timing in refreshing its lineup of professional PCs.

The reigning king of market share and overall shipments among the top five PC makers, during both the year and the quarter, continued to be Hewlett-Packard. Even so, IDC said HP saw a 5 percent decline in shipments in the U.S. and 1 percent worldwide. Gartner painted a similar picture, saying the company’s professional business had grown, as had its sales in Europe, the Middle East, and Africa. However, “weak” consumer PC sales in the U.S., as well as difficulty breaking into the Asia-Pacific region, had offset the company’s growth.

Shining above some of its competitors, Lenovo saw a 21.1 percent year-on-year growth worldwide, which IDC analyst Jay Chou told ZDNet Asia’s sister site, CNET, could be attributed to the company’s reach in both consumer and commercial businesses. Chou also lauded Lenovo’s business as being “geographically balanced”. Toshiba too saw double-digit growth, shipping 12 percent more PCs than it did during the same time last year, according to IDC.

Apple, which is recorded as part of Gartner’s U.S. vendor report, came in just under Toshiba in fourth-quarter shipments, though it bested Toshiba and all the rest of the companies on year-on-year growth at 23.7 percent. In fact, Toshiba and Apple were the only two vendors in Gartner’s top five to increase shipments in the U.S. year-on-year.

Going into 2011, Chou says that “consumer fatigue” for products like mini notebooks, along with “softening demand in Asia” and other parts of the world, could cut into the firm’s prediction of 10 percent growth over the course of the year, though “aggressive competition” could bring the market back up in the last two quarters. The softening demand Chou was referring to showed in the Asia-Pacific region (excluding Japan), which grew 7 percent during the fourth quarter–the first quarter of single-digit growth since the first quarter of 2009, IDC notes.

This article was first published as a blog post on CNET News.

Assange hearing set; WikiLeaks vows more cables

WikiLeaks editor Julian Assange has been given a court date for a hearing into whether he will be extradited to Sweden to face questioning over sexual assault allegations.

Assange appeared at Belmarsh Magistrates’ Court in south London on Tuesday for a management hearing into the extradition case. After hearing submissions, Judge Nicholas Evans set the date for the extradition hearing itself for Feb. 7 and 8.

The defence team asked that Assange’s bail conditions be relaxed on those dates to make it easier for him to attend court in Woolwich, to which the judge agreed. Under the terms of his bail, Assange must keep to a curfew at Ellingham Hall in Suffolk, home of Frontline Club founder Vaughan Smith, and wear an electronic tag. On Feb. 6 and 7, he will be allowed to stay at the Frontline Club in London.

Read more of “Assange extradition hearing gets court date” at ZDNet UK.

Google yanking H.264 video out of Chrome

Google just fired a broadside in the Web’s codec wars.

With its alternative WebM video-encoding technology now entering the marketplace, Google announced plans today to remove built-in Chrome support for a widely used rival codec called H.264 favored by Apple and Microsoft. The move places Google instead firmly in the camp of browser makers Mozilla and Opera, who ardently desire basic Web technologies to be unencumbered by patent restrictions.

“Though H.264 plays an important role in video, as our goal is to enable open innovation, support for the codec will be removed and our resources directed towards completely open codec technologies,” said Mike Jazayeri, a Google product manager, in a blog post.

A codec’s job is to encode and decode video and audio, a technologically complicated balancing act. Codecs must reduce file sizes and enable streaming media that doesn’t overtax networks, but they also must preserve as much quality as possible–for example by trying to discard data that the human senses won’t miss much and cleverly interpolate to fill in the gaps.

One big change coming with the new HTML5 version of the Web page description language is built-in support for video; most Web video today employs Adobe Systems’ Flash Player plug-in, which uses H.264 and other codecs under the covers. Although HTML5 video has promise, disagreements in the W3C standards group have meant the draft standard omits specifying a particular codec. Chrome was the only browser among the top five to support both WebM and H.264, but now Google has swung its vote.

Google’s move triggered flabbergasted glee among advocates of the “open Web”–one that employs open standards and shuns patent barriers. “Ok this is HUGE, Chrome drops support for H264,” said Mozilla developer Paul Rouget in a tweet.

But not everybody is so happy. Don MacAskill, chief executive of photo- and video-sharing site SmugMug, bemoaned the move. “Bottom line: Much more expensive to build video on the Web, and much worse user experience. And only Adobe wins,” he tweeted. “I want WebM. Badly. But I need time for hardware penetration to happen…This means the cheapest way to develop video on the Web is to use Flash primarily. Before, we could do HTML5 with Flash fallback.”
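The fallback pattern MacAskill is describing hinges on asking the browser what it can decode before committing to a delivery path. A rough sketch of the detection step in browser-side TypeScript (the Flash embed itself is omitted):

// Sketch: use HTML5 video where the browser reports codec support,
// otherwise fall back to a Flash-based player (embed omitted here).
function chooseDelivery(): "webm" | "h264" | "flash" {
  const probe = document.createElement("video");
  if (typeof probe.canPlayType !== "function") return "flash"; // no HTML5 video
  if (probe.canPlayType('video/webm; codecs="vp8, vorbis"')) return "webm";
  if (probe.canPlayType('video/mp4; codecs="avc1.42E01E"')) return "h264";
  return "flash";
}
console.log(chooseDelivery()); // e.g. "webm" in Chrome after the change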

H.264, also called AVC, is widely supported in video cameras, Blu-ray players, and many other devices, but it comes with significant royalty licensing fees from a group called MPEG LA that licenses a pool of hundreds of video-related patents.

WebM, though, has been an open-source, royalty-free specification since Google announced it last May. It comprises the VP8 video codec Google got through its acquisition of On2 Technologies and the Vorbis audio codec, which is associated with Theora, an earlier and otherwise largely unsuccessful royalty-free video codec effort.

It’s catching on–for example with smartphone chip support from Rockchip announced last week. Hardware decoding means computing devices can decode WebM faster and without quickly sucking batteries dry. And Adobe has pledged to build VP8 support into a future version of Flash Player.

The move spotlights the role Google has earned in the Web development world by building its own browser. Chrome, which now accounts for 10 percent of browser usage worldwide, according to analytics firm Net Applications, is a vehicle Google is using to try to promote its own agenda on the Web.

One big part of that is speed–fast page loads, fast graphics, fast encryption, fast JavaScript, and more–all of which helps expand activity on the Web. But there are plenty of cases where Google uses Chrome to advance favored standards such as WebGL for 3D graphics, Web SQL and Indexed DB for offline data storage, and WebM for HTML5 video.

Some Web developers including YouTube have begun embracing HTML5 video. But because the standard is mute on the issue of a particular codec, and because browser support can’t be counted on, Web developers typically rely on Flash, which is installed on the vast majority of computers in use today.
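That fallback logic is easy to see in code. The sketch below uses the standard HTMLMediaElement.canPlayType call to probe codec support; the strategy names are just placeholders:

    // Probe which HTML5 codec this browser can decode and fall back to Flash
    // when neither WebM nor H.264 is playable. canPlayType returns "",
    // "maybe" or "probably", so an empty string means "unsupported".
    function pickVideoStrategy(): "webm" | "h264" | "flash" {
      const probe = document.createElement("video");
      if (probe.canPlayType('video/webm; codecs="vp8, vorbis"')) return "webm";
      if (probe.canPlayType('video/mp4; codecs="avc1.42E01E"')) return "h264";
      return "flash"; // the lowest common denominator developers describe
    }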

Apple, with its own technology agenda to push, is keeping Flash off the iPhone and iPad despite Adobe’s attempts to reengineer it for the low-memory, anemic-processor, battery-constrained world of smartphones. For video, those devices rely on video encoded directly with H.264.

Adobe has become a major Google ally since Apple began taking a very hard-line stance against Flash in 2010. Google has heavily promoted Adobe’s mobile Flash agenda and built its Flash Player directly into Chrome. Adobe gave WebM a big boost by building it into Flash.

The partnership illustrates the pragmatic, political limits to Google’s open-Web advocacy. Flash Player is proprietary software, and building it into Chrome certainly helps preserve its relevance.

“If Google is dropping H.264 because their ‘goal is to enable open innovation,’ why not also drop support for closed plugins like Flash?” tweeted Daring Fireball Apple-watcher John Gruber.

One big uncertainty for WebM is its intellectual-property purity. Google proclaimed a royalty-free codec, but that didn’t stop MPEG LA from saying it’s considering offering a VP8 patent pool license. “We assume virtually all codecs are based on patented technology…MPEG LA doesn’t favor one codec technology over another; we are like a convenience store that offers patent licenses for any number of codecs as a service to the market,” said MPEG LA Chief Executive Larry Horn last May.

More than half a year after Google released the software, though, no new pools or patent litigation has emerged, and WebM has attracted new allies. That doesn’t mean litigation might not be waiting in the wings: “A codec is like a mechanical device with hundreds of parts. Any one or more could be the subject of a patent,” said Steven J. Henry, an intellectual property attorney at Wolf, Greenfield & Sacks, and patent holders may wait for years before “springing the trap.”

So far, that’s a theoretical concern, though, and Mozilla’s then-Chief Executive John Lilly said last year, “Right now we think that it’s totally fine to ship, or we wouldn’t ship it…We’re really confident in our ability to ship this free of encumbrances.”

It’s possible Apple and others could embrace WebM. Microsoft has refrained from glowering too harshly at WebM, even as it has issued an H.264 plug-in for Firefox users on Windows. But even if a change of heart occurs today, it will take a long time for tech giants like Apple and Microsoft to regear.

This article was first published as a blog post on CNET News.

New API a harbinger of future Quora apps

Quora, an increasingly popular question-and-answer site with a social networking angle, has released an application programming interface that opens the door for third-party software to use the service.

The API, announced last week, has very limited features and is only an alpha release that the company doesn’t promise will remain stable. But it’s an important milestone nevertheless for the company as it charts a course through the complexities of building a business on today’s Net.

That’s because an API, if rich enough, means people using a service don’t necessarily have to use that service’s Web site. For example, Twitter’s API is powerful enough that many people employ software such as TweetDeck or Seesmic to use the short-message service. That helped Twitter grow fast into something of a utility on the Net and saved the company money on Web servers that show pages to visitors. But it also means the company can’t as easily choose one obvious Web business model, showing online advertisements–and Quora’s subject-specific discussions could be a nice match for targeted advertising.

Of course, a full Quora API doesn’t preclude a Web ad business model. Enabling third-party use of the service could accelerate its growth, and if the site is still compelling enough to use directly, a stronger advertising business could follow. And fast growth is important for the site: Facebook, the social network where millions already spend a lot of time online, has a question and answer service of its own.

So far, there’s not enough of an API to bypass Quora’s Web site. According to a post by Quora engineer Edmond Lau, the API exposes information about the number of people a Quora user follows, how many people follow that user, and how many inbox messages and notifications the Quora user has. So there’s not enough at present to build a full-fledged Quora app that would let people do things like publish and answer questions, follow new people, and vote on the merits of various answers.
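A rough idea of what using the alpha API might look like, with the caveat that the endpoint and field names below are assumptions pieced together from Lau’s description, not Quora’s documented interface:

    // Hypothetical sketch of polling the alpha API's counters; authentication
    // is omitted, and the URL and field names are assumed, not documented.
    interface QuoraCounters {
      followers: number; // people following the logged-in user
      following: number; // people the user follows
      inbox: number;     // unread inbox messages
      notifs: number;    // unread notifications
    }

    async function fetchCounters(): Promise<QuoraCounters> {
      const res = await fetch(
        "https://api.quora.com/api/logged_in_user?fields=followers,following,inbox,notifs");
      if (!res.ok) throw new Error(`Quora API error: ${res.status}`);
      return res.json() as Promise<QuoraCounters>;
    }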

That’s enough to help out programmers, though. Lau specifically pointed out Andrew Brown’s Chrome extension and Jason Wiener’s Firefox extension as applications that would be able to use the API.

Those programmers were happy with the move. “Awesome work Edmond! Thanks again for the help,” Wiener said in a response to Lau’s post. And Brown said, “This is a great step forward for the developers community (and soon the Quora developers community). I’m excited to see what other products come from this.”

It’s important that Quora has begun adding an API, but it’s not a big surprise.

“When there are enough users and content on Quora that an API would be really useful, we’ll almost certainly add one,” said Quora employee Charlie Cheever in a Quora question in December 2009.

“For right now, we’ll probably focus on the Web interface since that’s how we think most people will use the product, at least to start. Another reason we probably won’t do an API for a little while is that the interface into the product is changing frequently in big ways right now,” he said, “and APIs that aren’t stable are hard to use effectively.”

North Korean Twitter, YouTube accounts hijacked

The Twitter and YouTube accounts held by the North Korean government were hijacked over the weekend and used to post messages critical of the regime and mocking North Korean leader Kim Jong-Il’s heir apparent, Kim Jong-Un.

The official Twitter account for North Korea posted messages on Saturday, the day of Jong-Un’s birthday, calling for an uprising and criticizing him for reportedly hosting lavish parties while North Koreans starve, Reuters reported.

(Video: YouTube animation mocking Kim Jong-Un)

Meanwhile, an animation appeared on the regime’s YouTube channel the same day showing Jong-Un mowing down impoverished women and children in a sports car, the report said. The posts and video were removed but another copy of the video was still accessible.

Members of DCinside, a South Korean Internet forum, have claimed responsibility for the prank, according to reports.

The hijackings come as North and South Korea prepare to begin talks at the end of the month. Last November, a group of South Koreans was killed on Yeonpyeong island during an exchange of artillery fire, and in March a South Korean military ship was torpedoed.

Net neutrality ignores business users

commentary The net neutrality debate has been hijacked by an argument about consumer and intellectual property rights. As usual, the needs of business users have largely been sidelined, says Nick White.

The recent BT launch of Content Connect, allowing ISPs to charge content providers, has sparked allegations of a two-tier internet and reignited the heated debate over so-called network neutrality.

Even though the issue of net neutrality has been simmering for some time, it is often misunderstood.

Read more of “Net neutrality ignores business users” at ZDNet UK.

Hotmail’s recent message loss hiccup explained

A service bug that left a group of Windows Live Hotmail users without access to new messages and entire folders for days has been explained, and safeguards have been put in place against future instances.

Writing on the Windows Team Blog, Mike Schackwitz of the Hotmail team says the problem stemmed from an error in an automated script that Microsoft uses to test the service for errors in everyday usage. Part of the script’s function is to clean up its tracks once it’s done creating test accounts, but this time the cleanup spilled beyond the test group and hit real user accounts.

The good news, at least, is that the data is still there. “Please note that the email messages and folders of impacted users were not deleted; only their inbox location in the directory servers was removed,” Schackwitz said. The empty mailboxes affected users saw when logging in were created automatically because their accounts no longer matched up with Hotmail’s database. “This is why the accounts received the ‘Welcome to Hotmail’ message,” Schackwitz explained.
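To make the failure class concrete, here is a hypothetical sketch (not Microsoft’s actual script) of how a test-account cleanup routine can spill over onto real accounts when its scoping check goes missing:

    // Hypothetical illustration: a cleanup step that should delete only
    // test accounts' directory entries, but a missing guard lets it remove
    // real users' inbox-location records too.
    interface Account { id: string; isTestAccount: boolean; }

    function cleanUp(accounts: Account[], directory: Map<string, string>): void {
      for (const acct of accounts) {
        // BUG: the guard below is missing, so real accounts are swept up.
        // if (!acct.isTestAccount) continue;
        directory.delete(acct.id); // removes the account's inbox-location record
      }
    }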

The bad news is for anyone who was affected by the bug and didn’t log in during the time it was being fixed, Schackwitz said. For those people, any messages sent to them would have bounced back to the senders as if the account had been shut down.

The script bug affected 17,355 users–16,035 of whom, Schackwitz said, had their accounts fixed a day after the company first began addressing the issue. The other 1,320 took another three days to sort out.

To keep a bug like this from happening again, Microsoft is separating its service-testing accounts from normal user accounts, as well as adding service status information to its support forums and bug-reporting tools.

This article was first published as a blog post on CNET News.

DOJ sends order to Twitter for Wikileaks-related account info

The U.S. Justice Department has obtained a court order directing Twitter to turn over information about the accounts of activists with ties to Wikileaks, including an Icelandic politician, a legendary Dutch hacker, and a U.S. computer programmer.

Birgitta Jónsdóttir, one of 63 members of Iceland’s national parliament, said on Friday afternoon that Twitter notified her of the order’s existence and told her she has 10 days to oppose the request for information about activity on her account since November 1, 2009.

“I think I am being given a message, almost like someone breathing in a phone,” Jónsdóttir said in a Twitter message.

The order also covers “subscriber account information” for Bradley Manning, the U.S. Army private charged with leaking classified information; Wikileaks volunteer Jacob Appelbaum; Dutch hacker and XS4ALL Internet provider co-founder Rop Gonggrijp; and Wikileaks editor Julian Assange.

Appelbaum, who gave a keynote speech at a hacker conference last summer on behalf of the document-leaking organization and is currently in Iceland, said he plans to fight the request in a U.S. court. Appelbaum, a U.S. citizen who’s a developer for the Tor Project, has been briefly detained at the border and people in his address book have been hassled at airports.

The U.S. government began a criminal investigation of Wikileaks and Assange last July after the Web site began releasing what would become a deluge of confidential military and State Department files. In November, Attorney General Eric Holder said that the probe is “ongoing”, and a few weeks later an attorney for Assange said he had been told that a grand jury had been empaneled in Alexandria, Va.

The order sent to Twitter initially was signed under seal by U.S. Magistrate Judge Theresa Buchanan in Alexandria, Va. on December 14, and gave the social networking site three days to comply. But last Wednesday, she decided that it should be unsealed and said that Twitter is now authorized to “disclose that order to its subscribers and customers”, presumably so they could choose to oppose it. (Salon.com posted a copy of the documents (PDF) last Friday.)

A wide-ranging court order
Buchanan’s order isn’t a traditional subpoena. Rather, it’s what’s known as a 2703(d) order, which allows police to obtain certain records from a Web site or Internet provider if they are “relevant and material to an ongoing criminal investigation.”

The 2703(d) order is broad. It requests any “contact information” associated with the accounts from November 1, 2009 to the present, “connection records, or records of session times and durations,” and “records of user activity for any connections made to or from the account,” including Internet addresses used.

It requests “all records” and “correspondence” relating to those accounts, which appears to be broad enough to sweep in the content of messages such as direct messages sent through Twitter or tweets from a non-public account. That could allow the account holders to claim that the 2703(d) order is unconstitutional. (One federal appeals court recently ruled that under the Fourth Amendment, a 2703(d) order is insufficient for the contents of communications and a search warrant is needed, although that decision is not binding in Virginia or San Francisco.)

A Twitter representative declined to comment on any specific legal requests, but told ZDNet Asia’s sister site CNET: “To help users protect their rights, it’s our policy to notify users about law enforcement and governmental requests for their information, unless we are prevented by law from doing so.”

Buchanan’s original order from last month directed Twitter not to disclose “the existence of the investigation” to anyone, but that gag order was lifted this week. Twitter’s law enforcement guidelines say “our policy is to notify users of requests for their information prior to disclosure”.

It’s unclear why Buchanan changed her mind. Twitter didn’t immediately respond to questions, but the most likely scenario is that its attorneys objected to the 2703(d) order on grounds that the law required that account holders be notified, and that the broad gag order was not contemplated by Congress when creating (d) orders in 1986 and could run afoul of the First Amendment.

Also unclear is how long Twitter stores full IP addresses in its logs; Google, for instance, performs a partial anonymization after six months.
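Partial anonymization of that kind is commonly done by zeroing the low-order bits of the address. A minimal sketch of the idea (one common technique, not a description of Google’s or Twitter’s actual pipelines):

    // Partially anonymize an IPv4 address by zeroing its final octet,
    // keeping the network prefix for coarse analytics while discarding the
    // part that most directly identifies a single machine.
    function partialAnonymize(ip: string): string {
      const octets = ip.split(".");
      if (octets.length !== 4) throw new Error("expected dotted-quad IPv4");
      octets[3] = "0";
      return octets.join(".");
    }

    partialAnonymize("203.0.113.42"); // => "203.0.113.0"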

Jónsdóttir was a close ally of Assange and supported efforts to turn the small north Atlantic nation into a virtual data haven. A New Yorker profile last year, for instance, depicted Jónsdóttir as almost an accidental politician whose self-described political views are mostly anarchist and who volunteered with Wikileaks.

At one point, the profile recounted, Assange was unshaven and his hair was a mess: “He was typing up a press release. Jonsdottir came by to help, and he asked her, ‘Can’t you cut my hair while I’m doing this?’ Jonsdottir walked over to the sink and made tea. Assange kept on typing, and after a few minutes she reluctantly began to trim his hair.”

Jónsdóttir even invited Assange to a reception — this was before last year’s series of high-profile releases — held at the U.S. ambassador’s residence in the capital of Reykjavik. “He certainly had fun at the party,” Jónsdóttir told the U.K. Telegraph. “He went as my guest. I said it would be a bit of a prank to take him and see if they knew who he was. I don’t think they had any idea.”

But after Assange became embroiled in allegations of sexual assault, which have led to the Swedish government attempting to extradite him from the U.K., Jónsdóttir said the organization should find a spokesman who’s not such a controversial figure.

“Wikileaks should have spokespeople that are conservative and not strong persons, rather dull, so to speak, so that the message will be delivered without the messenger getting all the attention,” Jónsdóttir said at the time. Although she said she did not believe the allegations, she suggested that Assange step aside, which he did not do.

In a blog post, Gonggrijp disclosed the e-mail that Twitter sent him, which said: “Please be advised that Twitter will respond to this request in 10 days from the date of this notice unless we receive notice from you that a motion to quash the legal process has been filed or that this matter has been otherwise resolved.”

Gonggrijp noted that the Justice Department misspelled his name, and speculated that other Web companies and e-mail providers may have received similar requests and quietly complied. “It appears that Twitter, as a matter of policy, does the right thing in wanting to inform their users when one of these comes in,” he said.

This article was first published as a blog post on CNET News.

Facebook will have to open up the books

There are a lot of rumors and speculation afoot about Facebook’s US$500 million funding round led by Goldman Sachs and the investment bank’s subsequent private offering of Facebook stock to deep-pocketed clients–more speculation, in fact, than there usually is around news pertaining to Facebook.

But one thing stands out as fact: should Facebook hit the threshold of 500 individual shareholders, it will be required either to start trading publicly or at least to begin disclosing its financial information, according to rules set by the U.S. Securities and Exchange Commission (SEC). From the end of the fiscal year in which it reaches this milestone, it has 120 days to comply.

A number of news outlets claim to have seen a set of documents distributed by Facebook to prospective members of the elite Goldman investor bunch, and the documents assert that Facebook will surpass that 500-shareholder margin in this fiscal year, which jibes with the calendar year for Facebook. Basically, the company is acknowledging that about 120 days into next year–by April 2012–it’ll be forced to open up the books.
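The date arithmetic behind that estimate is simple, assuming the company’s fiscal year ends December 31, 2011:

    // SEC deadline sketch: 120 days after the end of the fiscal year in
    // which the 500-shareholder threshold is crossed. The Dec. 31 year-end
    // follows from the report that Facebook's fiscal year matches the
    // calendar year.
    const fiscalYearEnd = new Date(Date.UTC(2011, 11, 31)); // Dec. 31, 2011
    const deadline = new Date(fiscalYearEnd.getTime() + 120 * 24 * 60 * 60 * 1000);
    console.log(deadline.toUTCString()); // April 29, 2012 (2012 is a leap year)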

What’s less clear is whether it would disclose its financials while remaining privately held, or actually go public. A Facebook IPO has been talked about for years, but investors and executives at the company have repeatedly fanned away rumors and have said there would be nothing until 2012 at the very earliest.

Facebook founder and CEO Mark Zuckerberg has been notably hesitant to go public. Much of this has been about preserving the culture of Facebook’s early days, and Zuckerberg’s belief that the feel of a start-up is preferable to that of a huge company–US$50 billion valuation be damned.

Facebook has managed to keep its shareholder count low thanks to a 2008 change to its stock structure in which new employees were given stock that would not vest until the company went public–stock that does not count toward the 500-shareholder tally. But early employees and investors have been trading Facebook stock so actively on private markets, even before the massive Goldman Sachs deal, that it had already piqued the interest of the SEC.

This article was first published as a blog post on CNET News.

Amazon revamps cloud support

Amazon Web Services has introduced two new plans and cut prices for user support.

“We’ve added new Bronze and Platinum plans, reduced our prices, and increased our responsiveness,” Amazon Web Services (AWS) lead web services evangelist Jeff Barr wrote in a blog post on Thursday announcing the move. “As we grow we have become more efficient.”

The Bronze plan, aimed at individual users, gives customers a guarantee of response to filed queries–known as trouble tickets–regarding AWS APIs and AWS infrastructure within 12 business hours for normal queries and one day for low-priority tickets. Pricing is US$49 per month.

Read more of “Amazon revamps cloud support” at ZDNet UK.

Social media gains inroad to APAC firms

No longer restricted as a consumer tool, social media will gain more relevance for Asia-Pacific companies in 2011 as organizations turn to these tools for marketing and recruitment purposes, said an analyst firm.

In a Wednesday prediction of 2011 IT priorities, XMG noted that social media is no longer a fad. In fact, social networks have changed the way people live as well as how they interact with businesses, said the report.

According to XMG, social media presence for Asia-Pacific companies will increase in 2011, with more than 30 percent of small and midsize businesses (SMBs) expected to use social networks for promotional purposes by year-end.

However, adoption of social media channels among large corporations in the region will be slower compared to their counterparts in North America and Europe. The region’s enterprises are more traditional and conservative, it added.

Instead of jumping on the bandwagon, large Asia-Pacific companies will invest in research studies to understand the potential of social media channels before fully integrating them in their longer-term enterprise architecture, explained XMG.

XMG’s findings echoed those of a Burson-Marsteller report released last year. The public relations agency found that 79 percent of global companies use social media platforms, while only 40 percent of top Asian companies have a corporate social presence.

Apart from marketing, social media use for scouting talent will increase, as it has proven to be a “cost effective” tool, said XMG. With Gen Y employees making up more than half of the workforce in Asia, social media officers will increasingly take on mainstream human resource (HR) roles, reaching out to employees and monitoring the company’s social culture, added the report.

Cloudy outlook
Aside from social media, XMG noted that adoption of cloud computing is on the rise in the Asia-Pacific region. Cloud computing is a “bright spot” in the IT services sector, growing at five times the rate of the IT industry, at a compound annual growth rate of 26 percent from 2010 to 2012, said the report.

According to XMG, cloud service providers need to take note that integrating existing and cloud applications is “critical” as the region’s companies have a mindset of leveraging and protecting their investments.

Education, customer experience and product bundling will be important in driving the uptake of cloud computing, it added.

Could Skype, other VoIP get blocked in China?

Skype still operates in China, and the Chinese government has not indicated publicly that it intends to ban the service there.

Yet a week after media sources in China and in the West erroneously reported that China had begun blocking Skype, rumors continue to surface that the software, which enables users to make phone calls via the Internet, will be banned. At a time when Twitter overflows with posts about the controversy, ZDNet Asia’s sister site CNET received a tip that China’s government was testing ways to block Skype and that officials would announce a ban next week.

A spokesman for Skype declined to comment.

CNET could not independently confirm the tip but the overall situation illustrates just how much uncertainty surrounds VoIP (voice over Internet Protocol) technology in China. The speculation can be traced to two events. First, China’s Ministry of Industry and Information Technology (MIIT) posted a notice to its Web site Dec. 10 that said the government was working to “launch an effort to strike against illegal” VoIP services, according to a story in The Wall Street Journal.

MIIT did not identify any VoIP service by name or mention when it would “strike.”

Second, something that may have also fueled some of the rumors is that shortly after MIIT posted its note, Skype suffered a worldwide outage. Some people in China apparently didn’t know a software glitch was the reason they couldn’t access the service and blamed China’s government.

Skype, which has plans to raise up to US$100 million through an initial public offering, operates in China with the help of TOM Online, a Hong Kong-based media company. TOM Online insists that TOM-Skype operates legally. On Monday, the Journal reported that the service has experienced no problems and has seen no indication of any blocks by China. Some pundits, however, worry that China’s government is interested in protecting state-owned telecommunications companies and their VoIP operations, and that it perceives Skype and similar services as a threat.

This article was first published as a blog post on CNET News.

Demand forces Goldman to end Facebook solicitation

Well, that didn’t last very long. In case you still had lingering doubts about the investing class’s appetite for a piece of Facebook, follow the money: the Wall Street Journal reported early Thursday that Goldman Sachs was so flooded with demand from investors seeking to buy into the social-networking site through its recent solicitation that it has decided to stop taking new orders, according to sources familiar with the situation.

Earlier this week, word leaked that Goldman had ponied up US$450 million–Digital Sky Technologies of Russia, a partner in the deal, accounted for another US$50 million–to acquire a position in Facebook. That paved the way for Goldman to invite clients willing to invest at least US$2 million to buy equity in Facebook, which is still private.

Goldman is likely to reap a fortune in fees from any Facebook stock sales. Investors must pay a 4 percent upfront fee to the investment firm as well as 5 percent on any future gains, according to the Journal.
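To make those terms concrete, here is the arithmetic on a hypothetical buy-in at the reported US$2 million minimum (illustrative figures only):

    // Fee arithmetic for the reported terms: 4 percent upfront plus
    // 5 percent of any future gains. The doubling scenario is hypothetical.
    const invested = 2_000_000;                 // reported minimum buy-in, US$
    const upfrontFee = invested * 0.04;         // US$80,000 paid immediately
    const hypotheticalGain = invested;          // suppose the stake doubles
    const feeOnGains = hypotheticalGain * 0.05; // US$100,000 more to Goldman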

The creation of an investment vehicle has reportedly spurred a Securities and Exchange Commission (SEC) inquiry into disclosure rules governing deals in which investors are able to buy shares of private companies. Were it a public company, Facebook would be valued at around US$50 billion, according to estimates.

Current regulations require companies with 500 or more shareholders to publicly report financial information. Facebook’s deal with Goldman Sachs creates a special fund that allows the social network site to stay under that threshold even though some investors will be able to buy up to US$1.5 billion in Facebook shares. The SEC declined to comment.

Goldman’s investment underscored Wall Street’s endorsement of Facebook’s potential to make money in online social networking. Facebook is considered to be worth twice as much as Yahoo, and about equal to well-established names such as Boeing and Kraft Foods on the open market. What’s more, the cash infusion buys time for Facebook to keep its books private and not have to worry about the vagaries of the market.

This article was first published as a blog post on CNET News.

IDC: Asia’s security software market to thrive in 2011

The Asia-Pacific security software market is poised for healthy growth this year, a new report has revealed.

Spending on security software in the Asia-Pacific excluding Japan region in 2011 will reach US$1.75 billion and post strong double-digit growth in most economies, according to an IDC statement released Tuesday. This includes expenditure on secure content and threat management (SCTM), security and vulnerability management (SVM), and identity and access management (IAM) products.

The market analyst expects the market to further grow to around US$2.4 billion by 2014, with SCTM forming the bulk of security software purchases.

IDC attributed the rise in security software spending to increasingly sophisticated threats and the management overheads each organization and user faces. Web applications, it added, also pose information-leakage risks, given the overwhelming volume of personal and confidential information that users exchange and share through them.

“The security landscape has been constantly seeing new threats growing exponentially in terms of complexity,” said Marco Lam, research analyst for security research at IDC Asia-Pacific, in the statement. “The malicious attacks included exploiting the vulnerabilities of applications and operating systems, insider sabotages and purloining, identity fraud and unauthorized access to corporate systems and networks. The ways of committing these misdeeds are ever changing.”

IDC forecasts the SVM and IAM markets will be the two fastest growing security software segments across the region as companies seek to reduce complexity and increase management efficiency as well as generate better reports for monitoring their security posture and audits.

The need for regulatory compliance will also drive the SVM segment, it said. The segment is set to enjoy the biggest growth in 2011, going up 13 percent year-on-year to hit US$141.8 million.

Spending on IAM is predicted to jump 12.5 percent over 2010 to reach US$371.5 million this year, driven by a continued focus on remote access, IDC added. Another contributing factor is the astonishing growth of social media usage, which places a burden on a company’s IT resource allocation–organizations will seek to better handle this dynamic using proper identity and access management.

The SCTM market, according to IDC, will achieve year-on-year growth of 10.5 percent to register US$1.2 billion.
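Working backward from those forecasts gives a quick sanity check on the implied 2010 spending levels (derived purely from the growth rates and 2011 figures quoted above):

    // Implied 2010 base for each segment: base = 2011 forecast / (1 + growth).
    const implied2010 = (forecast2011: number, growth: number) =>
      forecast2011 / (1 + growth);

    implied2010(141.8, 0.13);  // SVM:  ~US$125.5 million in 2010
    implied2010(371.5, 0.125); // IAM:  ~US$330.2 million in 2010
    implied2010(1200, 0.105);  // SCTM: ~US$1.09 billion in 2010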

Moving forward, IDC expects companies in the security software market to tap cloud computing to control management overheads, with businesses adopting security-as-a-service for automation and centralized management.

The growing number of data privacy ordinances in the region will also lead to a greater number of organizations implementing more security software to protect their data and rights, it added.

Amazon: On track for $100 billion in revenue in 2015

Amazon is likely to hit US$100 billion in annual revenue and is on a growth path that eclipses the world’s most successful retailer–Wal-Mart.

That revenue projection comes from Morgan Stanley analyst Scott Devitt. In a large research report that emphasized that Amazon has plenty of runway left for growth, Devitt made the following points:

  • Amazon can fuel growth just by taking wallet share from its existing customers. Amazon’s 121 million customers spend about US$275 a year. Wal-Mart’s 300 million customers spend US$750 a year excluding groceries and Sam’s Club. (See the arithmetic sketch after this list.)
  • International expansion continues.
  • New efforts such as Amazon Web Services and digital sales via the Kindle platform are promising.
  • Subscription e-commerce for grocery staples is another promising avenue. I’ve been experimenting with subscription groceries for things like tea and cereal. Overall, Amazon’s pricing needs to come down a bit vs. Wal-Mart–based on my informal Dignan Go Lean and Frosted Flakes cereal index–but scale should help that.
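The wallet-share arithmetic behind the first point (rounded figures as quoted in the report):

    // Implied annual revenue from the customer counts and per-customer
    // spending above. The Wal-Mart figure excludes groceries and Sam's Club.
    const amazonRevenue = 121_000_000 * 275;         // ~US$33.3 billion
    const walmartRevenue = 300_000_000 * 750;        // US$225 billion
    const headroom = walmartRevenue / amazonRevenue; // ~6.8x per customer base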

Read more of “Amazon: On track for $100 billion in revenue in 2015” at ZDNet.

Big media fails to turn ISPs into copyright cops

Last month marked two years since the Recording Industry Association of America, the trade group representing the four largest music labels, stopped filing copyright lawsuits against people suspected of illegal file sharing.

At the time, the RIAA said it would seek help in copyright enforcement efforts from Internet service providers, the Web’s gatekeepers, which are uniquely positioned to act as copyright cops. Under a proposed RIAA plan, the ISPs would first issue warning letters and gradually increase pressure on customers who illegally shared songs, and even suspend or permanently terminate service for repeat offenders. RIAA execs said then that some ISPs were weeks away from announcing the adoption of what they called a “graduated response” program.

Two years later, we’re still waiting. Not only have the largest ISPs declined to cut off accused file-sharing customers, but one ISP, Time Warner Cable, did more than anyone to derail a litigation effort launched this year against file sharers by independent and adult-film studios. An RIAA representative declined to comment for this story.

Instead of befriending the entertainment industry on copyright issues, the major bandwidth providers appear to be foes. The top ISPs have also conspicuously failed to support an antipiracy bill introduced in the U.S. Senate–and backed by the major entertainment sectors–late last year. If passed, the Combating Online Infringement and Counterfeits Act would authorize the government to shut down U.S. Web sites suspected of piracy as well as order ISPs to block access to similar sites overseas. Two ISP execs, who spoke to ZDNet Asia’s sister site CNET on condition of anonymity, were dismissive of the legislation and skeptical it will pass.

Comcast as content owner
Executives from entertainment companies brush all the bad news aside. They say the same thing they’ve said for two years: Just wait. They say there’s a big announcement from some of the major ISPs coming around the corner. This time, they might be right about at least one ISP.

A year ago, Comcast announced it would pay US$30 billion to acquire NBC Universal, parent company of one of the six largest Hollywood film studios and major TV networks. The deal for NBC Universal, home of “The Bourne Identity”, and TV show “30 Rock”, was supposed to be completed by the end of 2010 but it continues to draw scrutiny from regulators.

Nonetheless, the acquisition is expected to go through this year and that means Comcast has bet big on content. Film industry insiders say Comcast, with nearly 17 million high-speed Internet customers, has indicated it plans to get tough on piracy, though nobody seems to know what that means. A Comcast representative wasn’t immediately available for comment.

Beyond Comcast, however, there still appears to be little appetite for a get-tough-on-piracy attitude at competitors such as AT&T and Time Warner Cable. The reasons the RIAA and the Motion Picture Association of America, which has also pitched the ISPs on a graduated response, have failed to sell the program are varied: the ISPs don’t want to alienate customers, and the costs of implementing the plan are potentially high. It boils down to this: there are very few benefits for ISPs if they fight piracy and few consequences if they don’t, insiders say.

The bandwidth providers are bigger than their entertainment counterparts, wield more power in Washington, and much of the public supports them on this issue. And the big ISPs are more than capable of pushing back on the entertainment companies.

Underestimating ISP power
Evan Stone, the Dallas-based lawyer who this year began filing copyright suits against suspected film pirates on behalf of makers of adult movies, learned this the hard way. After filing a lawsuit last year on behalf of Larry Flynt Publications, the adult-entertainment empire that includes Hustler magazine, Stone needed to obtain the names of thousands of suspected film pirates. He first retrieved the Internet protocol addresses of the suspected film pirates. Stone then asked ISPs to identify the owners of the IP addresses.

Time Warner Cable informed him the company would hand over the names of only 10 subscribers a month. Stone seethed. “If you’re a pirate in these times,” Stone told CNET, “TWC is the ISP to have.”

But that was only the start of his troubles. When he pressed TWC to hand over more names, Larry Flynt Publications cut ties with Stone. While it’s hard to connect the dots, there’s no doubt TWC had leverage since some of Flynt’s movies are distributed over TWC’s channels.

Stone “wanted us to put pressure on the cable operators, but it’s not our goal to go after them”, Michael Klein, Larry Flynt Publications’ president, told AVN, a publication that covers the adult-film sector.

In a similar case, TWC offered to provide no more than 28 names a month to Dunlap, Grubb & Weaver, the law firm that represents a dozen independent film companies that filed copyright complaints against thousands of alleged film pirates this year. At the rate TWC proposed, it would take years for the filmmakers to obtain all the defendants’ names. The courts overseeing the cases signaled they wouldn’t hold up the legal process for this. As a result, Dunlap was forced to drop thousands of defendants from one of the complaints.

Meanwhile, Stone said other major ISPs have resisted helping him discover names. He said his other adult-film clients are prepared to take the ISPs to court and that the Digital Millennium Copyright Act is on their side. The DMCA requires ISPs to take specific action with regard to repeat copyright infringement committed by their customers or else lose protection from liability under the law’s safe harbor provision.

So, do ISPs need the big film studios and music labels or is it the other way around? Some regional ISPs have booted accused film and music pirates off their networks. One of the more aggressive appears to be Qwest. But similar to Comcast’s situation, Qwest has a financial stake in the entertainment industry. The company, which operates in 14 western states, is owned by tycoon Philip Anschutz, an investor in the movie “The Chronicles of Narnia”, and owner of the Regal Entertainment Group, the largest theater chain in the world.

Self-interest, it seems, is perhaps the only way to get an ISP to play cops and robbers with its own customers.

This article was first published as a blog post on CNET News.

Social bookmarking appeal remains strong

The future of social bookmarking is thought to be bleak, due in part to the uncertain future of pioneer site Delicious, note a couple of industry watchers. Others, though, argue that the fundamental concept of social bookmarking will preserve its relevance in a Facebook- and Twitter-dominated Web 2.0 world.

Jeremy Woolf, senior vice president of public relations firm Text 100, defined social bookmarking as a way of storing, organizing and sharing bookmarks, or links and references to any online content that a person finds interesting or useful. Such content can be in the form of Web pages, images or video and audio files, he added.

The bookmarking service was one of the first Web 2.0 tools to become popular because it made grouping and sharing Web information easy for users, Woolf noted in his e-mail.

But its relevance appears to be waning, said Craig Skinner, senior consultant at Ovum. He was referring to Yahoo’s decision last month to axe Delicious–one of the earliest social bookmarking sites, which the Web company acquired in 2005. A subsequent report stated that the site will not be shuttered but will instead be handed off to another company.

Elaborating on his point, the Ovum analyst said that the two distinct uses of social bookmarking–discovering new content and organizing it–have been increasingly replaced by alternatives that are fast gaining popularity among Net users.

For instance, people are increasingly turning to social networking sites such as Twitter and Facebook to discover new information in real-time, and wikis to curate information, Skinner observed.

Woolf concurred. “The rise of real-time social networks, advanced search engine algorithms and social search have [raised] questions over the future of social bookmarking.”

“If you can get great links, qualified either by your social graph, other users or powerful algorithms, why rely on a static link store?” he pointed out.

Evolving to remain relevant
That said, the concept of social bookmarking is still inherently useful today, according to Ben Cavender, associate principal at China Market Research Group (CMR). A Web user can easily locate the information he wants and trusts without being overwhelmed by the vast amount of online content, because other individuals with similar tastes and needs would have bookmarked and categorized such content, he explained.

Woolf, too, did not think that social networking sites will kill off social bookmarking services. Rather, he reckons social bookmarking is evolving with the technologies used in social media such as social algorithms, real-time updates and location-based services, adding that the aggregation of links will increase as the ability to tag and comment on content gets easier.

Users will also be able to filter content to meet their specific need. “For example, if you are looking for Japanese ramen in Hong Kong, you can search qualified bookmarks using filters such as social graph, demographic, influence of the commenters and so on,” the executive explained.

Woolf even argued that there is a business case for social bookmarking, which primarily serves consumer needs. He explained that as Generation Y workers bring their “tagging and sharing” behavior–cultivated from using social bookmarks–into the enterprise, filters and tags based on preferences, opinions and even project requirements will be used on company information and resources online or on the intranet.

In addition, Woolf noted that as companies move to less structured data storage systems, this ability to tag and share will become a critically important means of saving and finding data. “This is social bookmarking–but not as we knew it [before],” he concluded.

Jake Wengroff, global director of social media strategy and research at Frost & Sullivan, shared Woolf’s sentiments. He said that social bookmarking remains relevant as people will always want to share information.

However, he noted that social bookmarking sites are no longer strictly just that. Sites such as Digg and Reddit have since morphed into news sites, rather than remain as link aggregators, Wengroff pointed out.

In fact, Keval Desai, Digg‘s vice president of product, stressed in a phone interview that Digg does not consider itself to be a social bookmarking site. He instead sees the site as a community-driven news platform, where members can engage in conversation and share opinions over stories that are of interest to them.

Desai also disagreed that there is competition from the likes of Facebook and Twitter, saying that Digg is a “natural partner” with social networking sites. He pointed out that because Digg is platform agnostic, a person can sign in using his preferred social network account and share information that he thinks is relevant to the contacts of that social network.

Maciej Ceglowski, founder of Pinboard, similarly took pains to differentiate his site from being just another social bookmarking service. Over e-mail, he described Pinboard to be a personal archive where people can store the things they find online and add labels and descriptions to their discoveries.

Ceglowski added that while social networking sites like Facebook and Twitter are “ephemeral” and “focus on interaction in the present”, Pinboard is about collecting content that users would appreciate having around for many years to come.

Facebook beats Google as most visited in 2010

Facebook passed Google as the most visited website in the US in 2010, according to a survey by the web tracking firm Experian Hitwise.

The social networking site also claimed the top search term of the year, with variations on its name filling four of the 10 most popular searches, the survey found. In all, Facebook searches accounted for 3.48 percent of all web searches in the US in 2010, a 207-percent increase over 2009.

The study found that Facebook accounted for 8.93 percent of all US website visits in the year, ahead of Google.com’s 7.19 percent and third-placed Yahoo Mail with 3.85 percent.

However, if all of Google’s various properties are taken into account, the web search giant did overtake Facebook, with 9.85 percent of all website visits. Microsoft’s msn.com and bing.com also made it into the list of top ten websites, as did myspace.com.

Other terms in the top 10 searches included “youtube”, “craigslist”, “myspace”, “ebay” and “yahoo”.

Google said to be mulling digital newsstand

Think of it as an old-fashioned circulation war in the digital age, substituting tablets for tabloids.

Google is trying to raise support from newspaper and magazine publishers for its own digital newsstand for Android-powered devices, according to a Wall Street Journal report, a venture that would ramp up its competition with similar efforts backed by Apple and Amazon.com. The digital newsstand would reportedly feature apps from publishers that would allow versions of their content to appear on devices running Google’s mobile operating system, according to the report, which cited anonymous sources.

However, media executives said the details and timing remain vague and the venture might never materialize. It’s also unknown how Google would address the presence of those partner publishers’ content in its Google News section in order to add value to the e-newsstand.

Google representatives did not immediately return a request for comment.

The Web titan recently launched its Google eBookstore to peddle electronic versions of books, jumping into a hot market dominated by similar enterprises launched by Apple and Amazon. Competition between Apple’s iTunes Store and Amazon’s Kindle Store for mobile newshounds has been heating up. In an effort to secure its piece of the e-newsstand pie, Amazon recently announced plans to give newspaper and magazine publishers a greater share of the revenue it collects for periodicals sold through the Kindle Store. Meanwhile, Apple was rumored to be working with News Corp. to create a digital newspaper exclusively for iPads.

Google CEO Eric Schmidt has long had an eye on newspaper content. Even though his company’s Google News has been publicly derided as a leech of the newspaper business, Schmidt told a group of newspaper editors last April that he believes newspapers can make money online.

“We have a business model problem; we don’t have a news problem,” Schmidt said at the time. “We’re all in this together.”

Google has reportedly told publishers that it would take a smaller cut of revenue than the 30 percent that Apple takes from iTunes sales.

This article was first published as a blog post on CNET News.

Google’s 2010 report card and 3 new resolutions

As another year dawns, life is still pretty good for Google but ever more complicated.

With that, let’s reexamine the five New Year’s resolutions ZDNet Asia’s sister site CNET News outlined for Google at the start of 2010 to see how the company lived up to that unsolicited advice, and offer more of the same for 2011.

First, last year’s report card:

1. Don’t forget where you came from: This resolution involved priority No. 1 at Google: remain the world’s leading provider of Internet searches by a comfortable margin. It passed this test with ease. Despite significant investment on Microsoft’s part in Bing, and Yahoo’s declaration that its back-end outsourcing strategy would lead to front-end breakthroughs, Google ended 2010 pretty much where it started, actually gaining a slight amount of market share according to ComScore’s November 2009 to November 2010 comparison.

2. Get control of the engineers: Google probably wishes it had paid a little more attention to this one. Two 2010 incidents involving Google engineers gone wild–the now-infamous Wi-Fi Street View case and the quieter (and creepier) firing of David Barksdale–showed that Google’s power to amass and organize vast amounts of data can be seductive to those with poor oversight or ulterior motives.

Google also stumbled in launching Google Buzz with the assumption that users always wanted their most-frequently e-mailed contacts to also be their friends in a social-networking setting. Privacy training has been increased and Alma Whitten was tapped to put a public face on Google’s commitment to privacy, making it fair to say that keeping the trust of an increasingly wary public in 2011 is essential to Google’s well-being.

3. Get HTML5 standards finalized: This one isn’t really Google’s fault, but its vision of the Web as the premier development platform of our time is still a ways off. Standards bodies are famously contemplative, but Google also struggled to prove its own case that the Web can be king by missing a deadline to ship a production version of Chrome OS.

4. Live up to the promise of Google Books: Amazingly, the Google Books saga will drag on into yet another year as Google’s settlement with authors and publishers remains in legal limbo. By the end of the year Google did manage to launch its e-book store and release an interesting project on word usage over centuries, but is no closer to lifting the cloud of uncertainty over Google Books at the end of 2010 than it was at the beginning of the year.

5. Clarify your mobile strategy: Google definitely got the message on this one, scaling back its ambitious Nexus One project after it proved unpopular with both phone buyers and its business partners alike. Freed from such distractions, Android is now poised to grow even more in 2011 than it did over the past year as the iPhone alternative, and Google is about to make nearly US$1 billion a year on mobile advertising through Android and mobile search, it revealed toward the end of the year.

Here are three more things Google might want to think about in 2011.

Fight the government–and win
Google is at the point in its story arc where nearly everything it will do in 2011 will be scrutinized by some branch of the U.S. government, although it’s arguable it has already been there for years. Still, there’s little doubt the supervision is taking a toll and these concerns are already on the table in Europe.

The main problem–beyond the outcome of any potential regulation–is that larger start-ups aren’t going to be as interested in joining Google if they have to put their life on hold for six months while the government dithers over whether or not the deal is kosher. A great deal of Google’s success in 2010 came from larger acquisitions that might not have been approved if they were proposed in 2011, such as DoubleClick, AdMob, or YouTube.

Groupon, the darling of the daily deals department, was said to harbor such concerns as acquisition talks broke down between it and Google. AdMob was also reported to have sought an enormous “breakup fee” should its acquisition by Google have been squashed by federal regulators. At some point, doing business with those larger start-ups will stop making economic sense.

The hassle and distraction that a public government trial could present for Google executives is not exactly something to be welcomed. But at the same time, the uncertainty over what Google might and might not be allowed to do isn’t good for business either, and it also makes regulators look silly: either put your cards on the table and prove an unchecked Google is bad for the country or stop listening to whining from its competitors.

Google and the U.S. government are going to clash in a big way at some point: might as well break that ice in 2011.

Find your soul–and your scheduler
For many years, it was pretty simple to understand Google: it operated the best Internet search engine the world had yet seen, able to quickly match queries on virtually anything conceivable with relevant Web pages.

Google is so much more than that now. Search hasn’t gone away, but Google is increasingly a consumer software company, with products that are used in mobile phones, televisions, offices, and an ever-increasing array of gadgets.

One challenge highlighted by that growth is that Google needs to make prettier things. Google’s products in these markets tend to come off to average consumers as geeky and over-complicated, as even Google’s Andy Rubin, leader of the Android project, admitted late in 2010.

For some reason, Google’s Web design aesthetic–simple, uncluttered, and usable–doesn’t always surface in its consumer software products. It’s a little unfair to compare Google directly to Apple in this regard, since Apple has so much more control over how iOS software is presented to the end user, but fairly or unfairly, that’s the benchmark for mobile consumer software at the moment and Google doesn’t always measure up to that standard.

Also, while “launch and iterate” is a fabulous product development strategy for the Web–where subtle changes can be made extremely quickly and your customers pay nothing for the experience–it doesn’t always work in consumer electronics. The initial experience needs to be right–or at least not awful–the first time the buyer uses the product or negative associations start to set in no matter how quickly a patch is released.

Google therefore needs to release beefier versions of its software more consistently to give users and partners a chance to catch their breath. For example, the dizzying pace of Android development has been great for consumers and phone makers in one sense, but it can also cause confusion about which version of Android runs the fancy whiz-bang app that was just advertised by Verizon, and when their phone maker might approve that version for their device. Likewise, a more fully baked Google TV might have prevented some of the early criticism of the software.

Be social or change the playing field
Few companies are really trying to compete against Google in Internet search these days. Instead, those bent on capturing eyeballs and advertising dollars on the Web are organizing their users in social groups, building Web versions of coffee shops and night clubs where people enjoy spending time and learning about new things from their friends as opposed to building the libraries people need for research purposes but would rather not wind up on a Saturday night.

Google is clearly aware of this trend but has little to show for efforts in 2010 to be more social. The Web is not a zero-sum game: people will always turn to the search box for things they can’t or would rather not ask their friends, but they’ll also ask their group of Web contacts for information about a lot of things that Google’s bots can’t quite duplicate, like whether or not the boutique on the corner has something that matches the colors in my living room, or that the one bar on the corner has a bartender who went to college with my sister and can totally hook us up with free drinks.

Google needs to figure out a way to get people to share that kind of information on its domain or convince Facebook and its users to open much of that information to its search bots. It might be easier to do just enough in social to keep Facebook on its toes while getting busy developing the next Web organization matrix.

Just as social networking has started to reshape how information is collected and stored on the Internet, something will come along to reshape how social networking operates. If Google wants to be a Web influencer for decades it can’t miss out on that next development.

This article was first published as a blog post on CNET News.

China crackdown on porn shutters 60,000 sites

China claims to be making progress in its fight against Internet pornography.

More than 60,000 Web sites were shut down and about 350 million pieces of pornographic and indecent content were eliminated from the Internet in 2010, the country’s state-run Xinhua news agency reported last Thursday.

Police investigated 2,197 cases involving 4,965 people suspected of disseminating pornography via the Internet or cell phone in violation of Chinese law, according to the report. Of those suspects, 58 received jail sentences of five or more years.

Wang Chen, head of the State Council Information Office, heralded the campaign as successful and necessary.

“Our campaign has been a great success and this has not been achieved easily,” Wang said at a news conference, according to a Reuters report. “We have made the Internet environment much cleaner than before as there was a lot of pornography available.

“As long as there are people with bad motives who want to spread violent or pornographic information, we will have to continue our campaign to resolutely crack down on the spread of such information,” he said.

Police also confiscated more than 37 million pirated items, including DVDs and books, Xinhua reported.

China, which boasts the world’s largest Internet base with 450 million users, implemented new regulations in 2010 regarding cell phone users and Web site operators that were designed to aid police in their investigations. In September, China began requiring cell phone users (including foreign tourists) to provide identification when they set up a new account. And in February, the government announced that Web site operators will need to submit photographs of themselves and meet Internet service providers in person.

This article was first published as a blog post on CNET News.

Funding gives Facebook US$50B valuation

A new round of funding for Facebook has reportedly given the company a valuation of US$50 billion.

The social-networking giant raised US$500 million through deals with investor Goldman Sachs and Digital Sky Technologies, a Russian investment firm that has already invested about US$500 million in Facebook, according to a New York Times report. The report notes the investments give Facebook a greater value than Web pioneers Yahoo and eBay.

Facebook representatives declined to comment on the report.

The investments come as the U.S. Securities and Exchange Commission has reportedly begun scrutinizing the market for stock in hot, privately held companies such as Facebook, Twitter, LinkedIn, and Zynga. However, Facebook has recently taken measures to curb secondary-market trading, barring current employees from selling stock.

While the investment could increase pressure on the company to go public, at least one of Facebook’s most high-profile investors has said that Facebook does not plan to go public until 2012 at the earliest.

The investment also comes at a time of great interest in social media sites. Daily-deals site Groupon, which reportedly spurned a recent offer from Google worth US$6 billion, recently raised US$500 million toward a stated funding goal of US$950 million. Twitter recently closed a US$200 million investment led by Kleiner Perkins Caufield & Byers that valued the company at US$3.7 billion.

This article was first published as a blog post on CNET News.

Chrome finishes 2010 with 10 percent share

With Chrome’s steady rise, 1 out of every 10 people surfing the Web in December used Google’s browser.

Chrome’s gains have come largely at the expense of Microsoft’s Internet Explorer, whose usage share has been dropping for years, but there’s also a ray of hope for Redmond. IE9, which embodies Microsoft’s ambition to build a cutting-edge browser once again, is showing signs of real adoption with usage that grew from 0.4 percent in November to 0.5 percent in December, according to new statistics from Net Applications.

Fractions of a percent may sound insignificant, but with hundreds of millions of people using the Web, they actually represent a large number of real users. And in the current competitive market, browser makers are attuned to where the growth is occurring.

Chrome has been rising for months. Most recently, it rose from 9.3 percent in November to 10 percent in December, according to Net Applications. That’s helpful for Google’s ambition to speed up the Web overall; Chrome is a vehicle by which the company can explore, develop, and promote new features, such as Native Client, SPDY, WebP, and False Start, that Google hopes will speed the Web and make it a more powerful foundation for applications.

Mozilla’s Firefox, the second-place browser, stayed flat at about 22.8 percent, Apple’s Safari rose from 5.6 percent to 5.9 percent, and Opera was flat at about 2.2 percent. Chrome and Safari grew at the expense of IE, which dropped from 58.4 percent to 57.1 percent.

Note that because browser usage overall is increasing, even percentages that remain flat from month to month still mean a growing user base.

Microsoft can take consolation that its share losses have come from older versions of its browser. IE6, an advanced browser when released nearly a decade ago but now despised among Web developers for retarding progress on the Web, dropped from 13.7 percent in November to 13.1 percent in December. IE7 dropped from 9.5 percent to 8.8 percent.

This article was first published as a blog post on CNET News.

Google eyes ‘cloaking’ as next antispam target

Those obsessed with where Google ranks their Web site have a new topic to mull over: cloaking.

Google’s Matt Cutts, in charge of much of the search giant’s antispam efforts, tweeted over the past week that Google plans to take a closer look at the practice of “cloaking”, or presenting one look to a Googlebot crawling one’s site while presenting another look to users. This can include “serving a page of HTML text to search engines, while showing a page of images or Flash to users”, according to Google’s Webmaster Central help pages, but Cutts implied that Google was looking beyond page content in its renewed emphasis on cloaking by suggesting that Webmasters “avoid different headers/redirects to Googlebot instead of users”.
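
To make the practice concrete, here’s a minimal sketch of what user-agent cloaking can look like on the server side; the Node.js server, page content, and port are invented for illustration and are not drawn from any real site:

import { createServer } from "http";

// Serve keyword-rich HTML text to Google's crawler while sending human
// visitors an entirely different page: the behavior Google calls cloaking.
const server = createServer((req, res) => {
  const userAgent = req.headers["user-agent"] ?? "";
  res.writeHead(200, { "Content-Type": "text/html" });
  if (/Googlebot/i.test(userAgent)) {
    // The crawler sees plain, indexable text...
    res.end("<h1>Discount flights, hotels, and travel deals</h1>");
  } else {
    // ...while users get a page made of Flash, which crawlers index poorly.
    res.end("<object data='promo.swf' type='application/x-shockwave-flash'></object>");
  }
});

server.listen(8080);

Cutts’ warning about “different headers/redirects to Googlebot instead of users” covers the same trick one level up: branching on the crawler’s user agent to send different HTTP status codes or redirect targets rather than different page bodies.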

As with just about any change Google announces to its secret and powerful Web ranking recipe, Webmasters immediately started to fret over what exactly Cutts meant in his tweet. Search Engine Land summed up some of the reaction, which initially centers on whether legitimate sites serving up rich media files will get caught up in a Google purge, or whether sites that present mobile-optimized content to mobile browsers will be punished.

Still, it’s rare for Cutts and Google to announce this type of algorithmic shift so publicly, which implies they’re giving Webmasters a warning shot to reexamine their sites before the ranking changes go into effect, and that rankings may be a little fluid as the change rolls out.

This article was first published as a blog post on CNET News.

Online holiday shopping bumps up

The amount of money spent shopping online during the holiday season increased this year compared with last, say two recent reports, another sign of the Internet’s continuing permeation of American life.

SpendingPulse, a report from MasterCard, pegged the year-over-year rise at 15.4 percent. The report, released this week and covering the period from Oct. 31 to Dec. 24, looks at sales in the MasterCard payment network and combines those figures with survey-derived estimates of non-credit-card purchases.

According to the report, apparel sales led the field among e-commerce categories, a sign, perhaps, that shoppers are becoming more comfortable with buying clothing sight unseen. Electronics also made a showing, and jewelry managed to log an increase as well.

In general, the results show that the Web seems to be continuing on its way to becoming as American as apple pie–or the shopping mall. Though, according to various sources, online sales still make up only about 10 percent of all purchases, that seems likely to change.

“Today e-commerce accounts for a much larger share of overall retail sales compared to a few years ago,” Michael McNamara, vice president for MasterCard Advisors SpendingPulse, said in a statement. “And during this holiday season, it registered double-digit growth for six out of seven weeks.”

The SpendingPulse report said that this year, the Monday after Thanksgiving saw US$999.3 million in e-commerce receipts, a 25.3 percent increase over that same day last year. And six days in this year’s holiday shopping season saw online sales of more than US$1 billion, compared with three days in 2009.

ComScore served up its own batch of figures this week, with its report covering Nov. 1 through Dec. 20 and based on surveys of consumers. The analytics company reported a 12 percent increase in e-commerce spending during that time frame versus the same period last year.

In a statement, ComScore Chairman Gian Fulgoni said a 17 percent year-over-year rise in e-commerce receipts during the last weekend before Christmas “capped the heaviest online spending week of all time at US$5.5 billion”.

The company also singled out other significant dates:

  • Thanksgiving Day totals rose 28 percent over last year,
  • Cyber Monday (Nov. 29) logged a 16 percent rise,
  • Free Shipping Day (Dec. 17) saw a whopping 61 percent growth figure,
  • and Black Friday (Nov. 26) saw a 9 percent increase year over year.

A report last month from Coremetrics, which derives its data differently from ComScore, put the Black Friday figure at 16 percent. That report also pointed to the increasing importance of mobile devices and social-networking sites in the e-commerce cyberscape.

“We’re watching online retail, and increasingly social media and mobile, become the growth engines for retailers everywhere, as consumers embrace online shopping not only for its ease and convenience, but as a primary means of researching goods and services,” John Squire, Coremetrics’ chief strategy officer, said in a statement at the time.

This article was first published as a blog post on CNET News.

FCC Net neutrality rules reach mobile apps

Net neutrality advocates in Washington, D.C., have long insisted that eventual government regulations would be simple and easy to understand. Public Knowledge has called the Net neutrality concept “ridiculously simple”, and Free Press said the rules would be “clear” and easy to understand.

The Federal Communications Commission (FCC) finally released its long-expected regulations on Thursday, having approved them on a 3-2 party-line vote earlier this week, and they’re not exactly “ridiculously” simple. The rules and the related explanations total a whopping 194 pages (PDF).

One new item that was not previously disclosed: mobile wireless providers can’t block “applications that compete with the provider’s” own voice or video telephony services. By including that rule, the FCC effectively sided with Skype over wireless carriers.

A series of disputes erupted last year over whether Skype would be allowed on smartphones and over whether it was AT&T or Apple that was responsible for Google Voice not appearing in the iPhone’s App Store. In October 2009, AT&T agreed to support VoIP (voice over Internet Protocol) applications such as Skype on its 3G network, and Google Voice appeared as an iPhone application last month.

The legality of “paid prioritization”, which previously was ambiguous, has also been cleared up. The term refers to a broadband provider favoring some traffic over other traffic. That would mean Amazon.com can’t, theoretically, pay Comcast for its Web site to load faster than Barnes & Noble’s.

The FCC acknowledged there’s no evidence that “U.S. broadband providers currently engage in such arrangements”. But because any pay-for-priority deals would “represent a significant departure from historical” practice and potentially raise barriers to entry on the Internet, they should be outlawed.

That section of Thursday’s order, which has been championed by FCC Chairman Julius Genachowski, rejects arguments about paid prioritization that AT&T made earlier this year. As CNET reported at the time, AT&T noted it already had “hundreds” of customers who have paid extra for higher-priority services, and it argued that the Internet Engineering Task Force’s specifications explicitly permit the practice.

Genachowski had said during Tuesday’s vote that the rules would require all broadband providers, including mobile services, to disclose their network management practices, and that non-mobile providers would be prohibited from blocking and “unreasonably” discriminating against network traffic.

Other points that became public in yesterday’s order:

• Internet providers are allowed to block users from committing copyright infringement, “which has adverse consequences for the economy”, though the FCC intentionally left ambiguous the extent of this authority.

• Mobile providers, which are generally not the target of these rules, nevertheless can’t block access to “lawful” Web sites or “competing” services. That includes “a voice or video telephony service” provided by that carrier or a parent company.

• The definition of “reasonable” network management: “Appropriate and tailored to achieving a legitimate network management purpose, taking into account the particular network architecture and technology of the broadband Internet access service”.

• All broadband providers, including mobile wireless providers, must disclose their network practices. That includes “descriptions of congestion management practices; types of traffic subject to practices; purposes served by practices; practices’ effects on end users’ experience; criteria used in practices, such as indicators of congestion that trigger a practice, and the typical frequency of congestion; usage limits and the consequences of exceeding them; and references to engineering standards, where appropriate”.

• It also includes “whether and why the provider blocks or rate-controls specific protocols or protocol ports, modifies protocol fields in ways not prescribed by the protocol standard, or otherwise inhibits or favors certain applications or classes of applications”.

The FCC has been attacked on nearly all sides since its vote Tuesday, with pro-regulation groups like Free Press and Public Knowledge saying the order doesn’t go far enough, especially in terms of regulating wireless providers. That was echoed by FCC commissioner Michael Copps, a Democrat, who said he almost voted against the proposal because it “could–and should–have gone further”.

Robert McDowell, a Republican, dissented from the vote, saying the FCC did not have the legal authority to enact Internet regulations. The real effect, he predicted, would be: “Less investment. Less innovation. Increased business costs. Increased prices for consumers. Disadvantages to smaller ISPs. Jobs lost.”

The ultimate fate of the FCC’s order released yesterday is, of course, anything but certain.

In April, a federal appeals court unceremoniously slapped down the agency’s earlier attempt to impose Net neutrality penalties on Comcast after the company temporarily throttled some BitTorrent transfers.

And more than a few Republican members of Congress–including incoming House Speaker John Boehner–have slammed the FCC’s action as an illegal attempt to regulate the Internet. In the 2011 funding bill, they could prohibit the FCC from enforcing any such rules.

This article was first published as a blog post on CNET News.

Twitter acquires new personnel from Fluther

Twitter announced Tuesday that it has “acquired” four engineers and a designer from a question-and-answer (Q&A) start-up called Fluther. Fluther won’t shut down, the two companies explained; though development on it will not continue, a community manager will continue to maintain it.

The Fluther product itself was not acquired by Twitter, making this a different kind of talent acquisition from those that Facebook has been making famous lately. Facebook has acquired companies like Drop.io and Hot Potato specifically for the engineers or product leaders behind them, but in those deals it also acquired the software itself and, in most cases, shut it down.

The former Fluther engineers, it seems, will be involved in content discovery technology on Twitter.

“During our conversations with Fluther’s team, we were continually impressed by their technical talent, entrepreneurial spirit, and much of the thinking behind the question-and-answer product they’ve spent the last couple of years building,” a post on the Twitter blog explained. “When the Fluther team joins us they will focus on helping users discover the most relevant content on Twitter.”

This article was first published as a blog post on CNET News.

AOL buys personal landing page About.me

AOL has bought the social-networking aggregator about.me for an undisclosed sum, according to an announcement from the company.

AOL hopes that its purchase of About.me–co-founded by Tony Conrad, Ryan Freitas and Tim Young–will “enhance the consumer experience” of services such as AOL Mail and AIM, as well as AOL-owned sites such as Engadget and PopEater.

“Going forward, our business approach will also remain unchanged–startup-style, with the same hunger and spirit about.me was founded on…AOL is doing what great, sustainable business do every so often — they’re reinventing themselves,” Conrad wrote on his personal blog.

Read more of “AOL buys personal landing page About.me” at ZDNet UK.

Survey: Big charities have biggest Twitter power

One of the most talked-about uses for Twitter has been as a free marketing and outreach mouthpiece for nonprofits, particularly those with otherwise limited resources. But a new survey about the most “engaged” charity organizations and nonprofit foundations on Twitter–meaning how big a following a charity has as well as how much it interacts directly with followers and is talked about by other Twitter users–indicates that it’s still big, powerful nonprofits that have the most muscle in social media.

The survey was conducted by Empire Avenue, a start-up that translates social-network activity and following into a “share price” in a virtual stock market of online influence.

Most of the list is made up of nonprofits that already had global reach before social media came into the picture. The most “engaged” nonprofit on Twitter, the results found, is the United Nations’ Refugee Agency, which has 1.1 million followers. Second place on Empire Avenue’s list is the exception to the rule, Charity Water, a relatively small clean-water nonprofit that we’ve written about before for its status as a favorite of dot-com successes like Twitter co-founder Jack Dorsey and Bebo founder Michael Birch.

But the rest of the survey’s top 10 look far more like big, U.N.-backed charities than smaller ones fueled by dot-com thinkers. Antipoverty group CARE is in third place, followed by celebrity-bolstered HIV and AIDS charity RED and education organization Room to Read. In sixth place is the Bill & Melinda Gates Foundation, the Microsoft founder’s philanthropic venture; in seventh is UNICEF; then People for the Ethical Treatment of Animals (which has been known for some fairly creative social-media campaigns); and the CAA Foundation (the talent agency’s philanthropic arm). Rounding out the top 10 is Greenpeace, another organization that’s been known to craft edgier social-media strategies rather than simply tweet to followers.

Empire Avenue’s ranking encompasses 30 organizations in total, and most of them follow the lead of the top 10: big nonprofits that were well-known long before they made Twitter part of the strategy. A notable exception is in 27th place: Malaria No More, which was founded in 2006 and is chaired by Priceline co-founder Scott Case. Like Charity Water, Malaria No More has become closely associated with the dot-com elite, often working with influential Twitter users to get its #endmalaria hashtag distributed across the service’s web of chatter.

This article was first published as a blog post on CNET News.

Report: Google requests delay of new Google TVs

Google TV is apparently encountering a bit of static that has resulted in a programming change.

A number of TV manufacturers had been expected to unveil new Internet-ready TVs at the 2011 Consumer Electronics Show (CES) in Las Vegas next month. But Google has asked them to delay those plans so it can overhaul the Google TV software, according to a New York Times report that cited people familiar with the company’s plans.

Google representatives did not immediately respond to a request for comment.

The move comes less than a week after Google released an update to the software in an effort to make it more user friendly and improve Netflix integration. The Netflix experience on the previous software version was described by some reviewers as antiquated, and CNET’s Matthew Moskovciak went so far as to say that the software’s “Netflix app is about two generations behind those for competitors, such as Roku and Sony’s PS3.”

Google TV is one of the more high-profile attempts in recent history by the tech industry to marry the PC-based Internet and the traditional television world. Logitech and Sony have released devices running Google TV software, which allows people to watch regular old broadcast television while pulling up a series of Internet-based applications and Web sites.

However, Google TV has gotten off to a rocky start, and the search giant is still trying to get the big media companies to warm up to the software platform. So far, all of the major broadcast networks have blocked Google TV from providing access to their online content.

NBC, CBS, ABC, and Fox all block full episodes of their shows from appearing on the software platform. However, Google TV supporters note that the software is simply making the freely available content posted to the Web by broadcasters accessible on TV sets.

This article was first published as a blog post on CNET News.

Amazon names HP’s Rubinstein to board

Amazon said last week that it has elected Jonathan Rubinstein, head of Hewlett-Packard’s Palm unit, to its board of directors.

The addition of Rubinstein will bring a good bit of mobile device know-how to Amazon’s board. Rubinstein is an alum of Apple and revamped Palm via the Pre and WebOS. Now he has a key role in HP’s device strategy.

Rubinstein will get 5,000 shares of Amazon vesting over three years, according to a regulatory filing.

Read more of “Amazon names HP’s Rubinstein to board, brings periodicals to Android Kindle app” at ZDNet.

Assange legal case could hang on contradiction

A contradiction emerged last Friday over WikiLeaks’ relationship with one of its suspected sources, a dispute that could influence whether Julian Assange ultimately faces conspiracy charges in the United States.

The WikiLeaks editor, who was released from a London prison last Friday, denied knowing Bradley Manning, the Army private who is being held in a military brig in Quantico, Va., on charges that include leaking classified material.

“I had never heard of the name Bradley Manning before it was published in the press,” Assange told ABC News. “WikiLeaks’ technology [was] designed from the very beginning to make sure that we never know the identities or names of people submitting us material.”

That contradicts a chat log that appears to show Manning’s conversations before his arrest–and before his name ever appeared in the media–in which he described having a close relationship with Assange as a confidential source.

Manning reportedly told ex-hacker Adrian Lamo that he had “developed a relationship with Assange” over many months, according to transcripts posted by BoingBoing and Wired.com over the summer. Lamo told ZDNet Asia’s sister site CNET that the transcripts were accurate, but that he doesn’t have the computer equipment on which they were saved because the FBI had taken it.

The details are crucial. Federal prosecutors are reportedly exploring filing conspiracy charges against Assange on the theory that he collaborated with Manning on transferring secret documents obtained from the Army’s internal computer network. (That would allow them to avoid charging him under the Espionage Act.)

Sweden is seeking Assange’s extradition from the U.K. to question him about alleged sex offenses. Assange was released on bail of 200,000 British pounds, or about US$316,000, and he will be under strict limits on his movements until a hearing on Jan. 11.

The U.S. appears to be intent on pursuing a parallel indictment, though no charges have become official. A State Department spokesman today said “the investigation into the leak of classified cables is ongoing” but would not provide details. One lawyer for Assange said early this week that a grand jury in Virginia had been convened, but another said last Thursday that was only a rumor.

Here’s one excerpt from the published logs that appears to show that when asked for unreleased information, Manning refused, saying he’d have to check with Assange:

(1:51:14 PM) Adrian Lamo: Anything unreleased?
(1:51:25 PM) Bradley Manning: i’d have to ask assange
(1:51:53 PM) Bradley Manning: i zerofilled the original
(1:51:54 PM) Adrian Lamo: why do you answer to him?
(1:52:29 PM) Bradley Manning: i dont… i just want the material out there… i dont want to be a part of it

This isn’t the first time that Assange may have misstated facts, or perhaps even lied, in an attempt to protect a source. In July, he denied having classified State Department cables, saying that if he did, “we would have released them”.

Four months later, WikiLeaks began slowly publishing the State Department dispatches. Approximately 1,618 of 251,000 have been released so far.

This article was first published as a blog post on CNET News.

Delicious to jump ship from Yahoo, not shutter

When a former Yahoo employee leaked a list of products that the troubled company plans to shut down, many people were up in arms over the fact that one of the items on the list was Delicious–a social-bookmarking company Yahoo acquired in 2005 that still has a handful of loyal users.

But Delicious says it plans to find an exit strategy from Yahoo, not shut down.

“We are not shutting down Delicious,” a post on the Delicious blog read. “While we have determined that there is not a strategic fit at Yahoo, we believe there is [an] ideal home for Delicious outside of the company where it can be resourced to the level where it can be competitive.”

The wording of the post does not make it clear as to whether Delicious was facing the threat of a shutdown or whether Yahoo’s plan had been to sell it all along. A handful of CEOs in the social-media business have publicly (and perhaps not seriously) posted blogs or tweets offering to buy Delicious from Yahoo, and there is at least one Twitter petition circulating on behalf of people who want it to be turned into an open source product.

The post explains that the Delicious team is “actively thinking about the future of Delicious”, and is “in the process of exploring a variety of options and talking to companies right now”.

Yahoo’s planned shutdown of about a half dozen products and consolidation of a few more was revealed on Thursday, when the founder of another possibly doomed product, MyBlogLog, posted a screenshot from an internal presentation to Twitter. The product closings came hand in hand with the layoffs of several hundred Yahoo employees, and few of them were surprising. But even some nonusers were dismayed over the news of the Delicious shutdown, given that the service is widely regarded as a great product, and its framework of social news and tagging was arguably visionary.

But Yahoo had put it on a back burner long ago. Founder Joshua Schachter left the company two years ago and then headed to a stint at Google.

This article was first published as a blog post on CNET News.

Google search results warn of compromised sites

For years, Google has used its search results to warn Web surfers about sites that appear to be hosting malware. Now, the company is adding a warning in search results when a site appears to be compromised but may not actually be downloading malware to visitors’ computers.

Google search users should now start seeing a new hyperlink warning that says “This site may be compromised,” adjacent to some results if Google’s system has detected something on the site that would indicate that it has been hacked or otherwise compromised. Clicking on the warning link leads to a Help Center article with more information.

“If a site has been hacked, it typically means that a third party has taken control of the site without the owner’s permission,” the article says. “Hackers may change the content of a page, add new links on a page, or add new pages to the site. The intent can include phishing (tricking users into sharing personal and credit card information) or spamming (violating search engine quality guidelines to rank pages more highly than they should rank).” Web surfers can also just click on the result to go directly to the site.

Google first started putting warnings next to results in late 2006, but focused on sites that were hosting or actively serving malware. Those warnings say “This site may harm your computer,” and clicking on the result itself takes you to another page that provides more information.

The new warning is designed to focus on Web sites that may not be actively infecting computers, but that may be compromised and conducting other types of attacks, such as spam or phishing.

Along with warning Web searchers, Google tries to notify Webmasters when it detects that their site may be compromised, via messages in the Google Webmaster tools console, the company said.

“Of course, we also understand that Webmasters may be concerned that these notices are impacting their traffic from search,” Google says in a post on the Webmaster Central blog last Friday. “Rest assured, once the problem has been fixed, the warning label will be automatically removed from our search results, usually in a matter of days. You can also request a review of your site to accelerate removal of the notice.”

This article was first published as a blog post on CNET News.

MasterCard willing to cut off pirate sites

MasterCard is willing to stop processing transactions from sites trafficking in pirated music, movies, games, and other digital copyrighted content.

Lobbyists working for MasterCard have told trade groups from the entertainment sector that the credit card company is supportive of The Combating Online Infringement and Counterfeits Act, an antipiracy bill introduced into the Senate last September, sources with knowledge of the talks tell ZDNet Asia’s sister site CNET.

Backed by U.S. Senator (Sen.) Patrick Leahy, chairman of the Senate Judiciary Committee, and committee member Sen. Orrin Hatch, the bill would authorize the Department of Justice to shut down domain names of U.S.-based Web sites judged to be dealing in pirated content, and would also give it the power to order Internet service providers (ISPs), payment processors, and online ad networks in the United States to cease doing business with overseas pirate sites. Opponents of the law say it will give the government sweeping powers to censor U.S. citizens.

Representatives from MasterCard, Visa, and American Express did not respond to interview requests.

When asked for a comment about the ongoing talks between MasterCard and the entertainment sector, the music industry’s trade group, the Recording Industry Association of America (RIAA), issued a statement from Mitch Glazier, executive vice president of government and industry relations.

“MasterCard in particular deserves credit for its proactive approach to addressing rogue Web sites that dupe consumers,” Glazier said. “They have reached out to us and others in the entertainment community to forge what we think will be a productive and effective partnership.”

The antipiracy strategy of large Hollywood studios and music labels is evolving and is now less about filing lawsuits against site operators and individual file sharers. Big media companies now seem intent on cutting off sources of income for illegal file-sharing and streaming sites. Many of these operations make money by posting ads from U.S. ad networks, including Google. They also charge for “premium services” such as larger storage capacity.

One of the sites the entertainment industry says offers access to unauthorized copies of films is Megaupload. To obtain a membership to the site, one can pay with PayPal, Visa, MasterCard, or American Express. There are certainly other ways for sites to accept payment than these, and the entertainment industry knows it. It also knows that many people are still leery of online transactions, even with established payment methods.

The goal of the entertainment sector is to discourage as many people as possible from doing business with pirate sites.

To that end, the MPAA, RIAA, and other trade groups have pressured payment services, ad networks, and ISPs to do more piracy fighting. For some of these companies, Leahy’s bill seems to have helped spur people into action.

Two weeks ago, Google announced it would improve antipiracy efforts, including a promise to do more to keep Web sites that provide infringing materials out of AdSense, the company’s advertising program that pays Web sites for hosting ads.

In addition, the Interactive Advertising Bureau (IAB), the trade group representing 470 members that account for more than 86 percent of U.S. online advertising, says it also wants to work with the entertainment industry and lawmakers on cutting off pirates.

But according to Mike Zaneis, the IAB’s general counsel, the group wants to find a way to thwart “rogue sites” without harming the ad business. He said one thing that all the parties must understand is that serving ads online is complicated and that often a company serving ads has no idea where the ad will end up.

“There’s a commitment here to work with the content community and senators Hatch and Leahy,” Zaneis said. “We want to find the best option and do what everybody wants, which is to cut off funding to the rogue sites.”

This article was first published as a blog post on CNET News.

Julian Assange leaves London jail on bail

A beaming Julian Assange emerged from solitary confinement in London’s Wandsworth Prison yesterday and said he plans to continue his work as the most visible face of WikiLeaks.

The Australian programmer, computer hacker, and document-leaking evangelist told a scrum of journalists and supporters that “I hope to continue my work”, and insisted he was innocent of the odd sexual allegations that led Swedish authorities to seek his extradition.

“To the British justice system itself, where if justice is not always an outcome, at least it is not dead yet,” Assange said.

On Thursday, Justice Duncan Ouseley of the Royal Courts of Justice in London rejected prosecutors’ efforts to keep Assange locked up while the extradition proceedings continue. Bail was set at 200,000 pounds, or about US$316,000, and Assange will be under strict limits on his movements until his next hearing.

For most of the day, it was unclear whether Assange would be released in time to make his evening curfew at Ellingham Hall, a country manor a few hours’ drive outside of London that a supporter has made available.

The property is owned by British media pioneer Vaughan Smith, who told the U.K.’s Independent newspaper: “Having watched him give himself up last week to the British justice system, I took the decision that I would do whatever else it took to ensure that he is not denied his basic rights as a result of the anger of the powerful forces he has enraged.”

During his supervised release, Assange will be required to report to police every evening and be at Ellingham Hall daily for four hours during the day and four hours at night. His next court date has been scheduled for Jan. 11.

Assange is wanted in Sweden for “overraskningssex”, which his British lawyers say translates to “sex by surprise”. One Swedish woman claims Assange had sex with her after a condom broke, and another accused him of having sex without one in the first place.

Meanwhile, the U.S. government is piecing together a case against him for publishing classified Army and State Department files.

An analysis by ZDNet Asia’s sister site CNET this week shows that Assange could be held liable under the Espionage Act, but that the 1917-era law itself could violate the First Amendment’s guarantee of freedom of the press. Justice Department prosecutors are attempting to build a conspiracy charge against Assange in hopes of avoiding some of the free speech problems, the New York Times reported yesterday.

Bradley Manning, the Army private accused of being a source for WikiLeaks, is being held in the U.S. Marine Corps brig in Quantico, Va., in “inhumane conditions,” according to a report at Salon.com.

This article was first published as a blog post on CNET News.

Facebook confirms outage amid new design rollout

Facebook apologized for a brief outage today, citing a technical glitch that prompted the company to take the site offline for a short while.

“For a brief period of time, some internal prototypes were made public to a number of people externally,” a Facebook spokesman said this afternoon when asked for comment. “As a result, we took the site down for a few minutes. It’s back up, and we apologize for the inconvenience.”

It was unclear how widespread the outage was, how long it lasted, and whether it was related to the rollout of new brand pages on the site. A Facebook spokeswoman provided this comment when asked additional questions: “We do not comment on future products and have nothing to announce at this time.”

Numerous reports surfaced on Twitter earlier today about a Facebook outage, though it appeared to remain accessible for some users.

Facebook tweeted this comment before issuing a statement: “Facebook is available again after being down for a brief period. We apologize for the inconvenience.”

This article was first published as a blog post on CNET News.

Yahoo slashing products like Delicious, MyBlogLog

Layoffs apparently aren’t the only thing Yahoo is doing to slim down and cut expenses: A screenshot from a company Webcast that began circulating Thursday indicates the company will be shutting down Yahoo Buzz, MyBlogLog, Delicious, AllTheWeb.com, Yahoo Picks, and AltaVista, as well as merging and consolidating a handful of other products like geolocation service Fire Eagle and event listing site Upcoming.

The screenshot was originally posted to Twitter by Eric Marcoullier, a former Yahoo employee who had been the founder of MyBlogLog, a Yahoo acquisition that will now be shuttered.

“Part of our organizational streamlining involves cutting our investment in underperforming or off-strategy products to put better focus on our core strengths and fund new innovation in the next year and beyond,” a statement provided by Yahoo read. “We continuously evaluate and prioritize our portfolio of products and services, and do plan to shut down some products in the coming months such as Yahoo Buzz, our Traffic APIs, and others. We will communicate specific plans when appropriate.”

Closing small products, many of them acquisitions in the first place, to cut costs is akin to Google’s announcement in early 2009 of a bulk product shutdown that saw the death of Dodgeball, Jaiku, Notebook, and Catalog Search. But it’s a significantly more dire situation at Yahoo, which has been troubled for years now. Yahoo’s layoffs, announced Tuesday, cut four percent of the company’s global head count as it continues to struggle for a turnaround under the leadership of CEO Carol Bartz.

Many of the Yahoo products being shut down are social-media apps that are long-shot rivals to offerings from the likes of Facebook, Google, Digg, and Foursquare. This year, Yahoo bought a number of products like Associated Content and launched services like Yahoo Deals that further its attempts to be a media and advertising-based company; Yahoo’s attempts to build a social network, like Yahoo 360 and Mash, were already shut down long ago.

One of the soon-to-be-euthanized apps, Delicious, will be a painful one for many: A social-Web pioneer founded in New York, the bookmarking service was one of the first hints at the promise of “social news”, and remains a favorite among loyalists.

This article was first published as a blog post on CNET News.

Give us your say on WikiLeaks

A lot has unraveled over the past month–thanks to WikiLeaks–leading to much embarrassment and uneasy tension between heads of state.

In the region, politicians in countries such as Singapore were not left unscathed by the leaked cables, triggering a flurry of opinions both from government officials and the public.

ZDNet Asia now wants to hear your views via an online poll, which is also running across various ZDNet global sites including Australia, China, Japan, the United States and the United Kingdom.

Here’s the platform to have your views heard. So go ahead and take our poll, and we’ll review the results in the coming weeks.

Why Facebook CEO deserves to be Person of the Year

commentary He’s only 26, he’s been at the helm of his company for less than a decade, and one of the most famous uses for the technology he’s built is that it lets tens of millions of people start virtual farms with cartoon cows. Yet Mark Zuckerberg, the founder of Facebook, is Time magazine’s 2010 Person of the Year–and it’s a title he deserves.

The Web, of course, is sniping at this choice, as it often does with sweeping, editorially arbitrary decisions about what’s important and what isn’t. Some prominent members of the media promptly criticized Time for settling on the relatively feel-good choice of Zuckerberg rather than a figure who presents a real threat or controversy. The privacy concerns that Facebook presents are serious, thought-provoking issues worth addressing, but for the most part they have not veered into matters of national security. Facebook has had many a brush with global politics as a grassroots organizing tool for a number of activist groups across the world and a crucial campaign vehicle for candidates like Barack Obama, but the argument could be made that it’s the broader Internet itself, not specifically Facebook, that catalyzes this.

Many critics of Time’s Person of the Year choice pointed out the fact that a popular-opinion poll for the same distinction gave the title to the contentious Julian Assange, founder of information dissemination site WikiLeaks–a hero to some and an alleged terrorist to others. Assange, without a doubt, has shaken up relationships between countries, angered sovereign leaders, and stirred international dialogues and debates about free speech, transparency, and security.

Some surmised that Time has grown soft–that the storied newsweekly, once unafraid to name Adolf Hitler or Josef Stalin as Person of the Year, no longer has the chutzpah to select someone who evokes too much unease, discomfort, or fear. They pointed to the fact that in 2001, with the September 11 terror attacks fresh and raw in the global imagination, Time chose New York City mayor Rudolph Giuliani, a first responder among politicians in the wake of the attacks, as Person of the Year rather than terrorist mastermind Osama bin Laden. That’s a legitimate claim.

But in this case, Time’s pick is the right one. For one, Assange did not become a household name until the final three months of the year. But more importantly, the people flipping through the pages of an issue of Time or pressing the page-turn buttons on its Web site may never feel the direct effects of his actions. The saga of the recently captured Julian Assange and his irreverent, almost nihilistic treatment of government secrets is the stuff of a James Bond film or a Stieg Larsson novel. It could forever alter international relations, or it could ultimately become a geopolitical flash in the pan.

In contrast, Facebook has bit by bit worked its way into every aspect of ordinary life, even as the company has said time and again that the onscreen portrayal of Mark Zuckerberg in the movie “The Social Network” is entirely fanciful and that building world-changing technologies used by hundreds of millions of people across the planet is just not that sexy. It was in 2010 that this reached new heights. Zuckerberg himself leads a no-frills lifestyle and has gone from openly shying away from media exposure to picking and choosing relatively simple, uncontroversial appearances.

It’s easy to mistake austerity for irrelevance. It’s also easy to take something for granted when it’s so commonplace that it’s become ordinary, as Facebook has. Of course you have a Facebook account: “everyone” does. What Mark Zuckerberg has created is so potent and affecting, so much a part of our lives that perhaps we no longer realize its importance and how much it’s altered the ways in which we communicate and connect. And that’s why, with apologies to the elected officials and jet-setting renegades who were relegated to “runners up”, the 26-year-old CEO deserves the Person of the Year title.

This article was first published as a blog post on CNET News.

Facial recognition comes to Facebook photo tags

Taking yet another step in the ongoing process of upgrading its photo-sharing service, Facebook announced Thursday that it will soon enable facial-recognition technology–meaning that when members upload photographs and are encouraged to “tag” their friends, they will be able to choose from a list of suggestions.

Thanks to its treasure trove of user photos that have already been tagged, not to mention personal profile photos, Facebook has built up a huge base of data for gauging exactly who’s in what photo. There are now 100 million photo uploads per day, according to Facebook, and 100 million “tags” each day as well. Tagging is also a hallmark of Facebook’s photo product, which was otherwise bare-bones and difficult to use, and which lagged behind competitors at its launch. Being able to annotate each photo with friends’ names was largely what propelled Facebook Photos forward.

“Tagging is actually really important for control, because every time a tag is created it means that there was a photo of you on the Internet that you didn’t know about,” Facebook vice president of product Chris Cox told ZDNet Asia’s sister site, CNET. “Once you know that, you can remove the tag, or you can promote it to your friends, or you can write the person and say, ‘I’m not that psyched about this photo.'”

The facial recognition technology has been developed in part by Facebook and in part through licensed technology. (Cox declined to name the companies involved.) It’ll start rolling out to about 5 percent of Facebook’s U.S. users next week. “Assuming that goes well, we’ll just continue to roll it out,” Cox said.

The revamp of the once low-end photo-sharing product has been going on in full force since the spring, when the company acquired a photo-sharing start-up called Divvyshot and put founder Sam Odio in charge of the engineers developing Facebook Photos.

“We wanted to make our Photos product not suck,” Cox said. This fall, the company unveiled a new interface and “bulk tagging.” The addition of facial recognition is another step in that overhaul, he said.

Of course, there will be someone out there who cries foul with regard to how Facebook handles users’ personal information, or wonders whether this is a sign that Facebook knows too much about us all. Cox explained that there will be an opt-out for the new feature, so that members who do not want to show up in their friends’ tagging suggestions won’t.

This article was first published as a blog post on CNET News.

Tiger Airways’ ads ashore on Pirate Bay

Discount airline Tiger Airways has been left red-faced after its advertisements were found on infamous BitTorrent site The Pirate Bay.

Despite the airline having a policy against funding illegal activity, including BitTorrent sites that host pirated media, advertisements for its low-cost Asian flights had sneaked past Tiger’s eye and were splashed boldly across the website.

The Singapore arm of the airline claimed ownership of the ads, but said the breach was the fault of its outsourced advertising agency.

“The ads were placed by an outsourced agency,” the company said. “Upon discovering it, we have instructed them to withdraw our ads from [The Pirate Bay].”

“It is against our policy to advertise on any site that we suspect may be related to undesirable, malicious or illegal acts of content.”

The Australian arm said that, to its knowledge, its agency had not breached the advertising policy.

The news comes as companies such as Google are under pressure to cut advertisements listed on illegal and notably pro-piracy websites.

This article was first published at ZDNet Australia.

Chrome 9 beta to bring faster, fancy graphics

Mozilla and Microsoft have been racing to see which will be the first to release a production-quality browser with hardware-accelerated graphics, but at the current rate, it could be Google’s Chrome 9 that crosses the finish line first.

Google likely will be issuing Chrome 9 in beta form soon. It had been planned for Tuesday, but Anthony LaForge, a Chrome technical program manager, pushed it back. “The crash rate [of] 400 crashes per million page loads on the browser is simply too high,” he said in a mailing list message.

Hardware acceleration isn’t a simple either-or situation, but rather a long list of possible ways a graphics chip can speed up the task of painting pixels on a screen. Among aspects that can be accelerated: SVG (Scalable Vector Graphics); 2D graphics drawn with the new Canvas feature; font rendering; video decoding and resizing; the graphical formatting, transitions, and transformations of CSS (Cascading Style Sheets); WebGL for 3D graphics; and compositing different elements of a Web page into the single view a person sees.

Chrome is due for at least some of them–compositing, WebGL, and 2D Canvas, for example. However, it’s very much a work in progress: accelerated 2D Canvas is disabled on Windows XP, and a second phase of 2D Canvas acceleration is currently scheduled for Chrome 11.

WebGL holds the potential to dramatically transform the Web, most notably through 3D games but also many other possibilities such as online maps and virtual worlds. Google, with Chrome OS heightening its emphasis on Web applications as an alternative to native software, is a major advocate of WebGL.

Chrome relies on the OpenGL interface for 2D and 3D graphics acceleration. That’s complicated on Windows, where OpenGL support is spotty in comparison to Microsoft’s rival DirectX technologies. Google sidesteps the limitation through a project called ANGLE that translates OpenGL commands into DirectX.

Even so, there are plenty of problems. To minimize them, Chrome will come with a blacklist to disable the feature on incompatible computers.
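
Chrome’s blacklist operates inside the browser, but pages can protect themselves with ordinary feature detection. Here is a minimal sketch that assumes nothing about Chrome’s internals; browsers of this era exposed the context under the experimental-webgl name as well:

// Probe for WebGL support before relying on 3D graphics; fall back when
// the browser, GPU, or driver combination doesn't provide a context.
const canvas = document.createElement("canvas");
const gl = (canvas.getContext("webgl") ||
  canvas.getContext("experimental-webgl")) as WebGLRenderingContext | null;

if (gl) {
  gl.clearColor(0, 0, 0, 1); // context acquired: ready for 3D rendering
  gl.clear(gl.COLOR_BUFFER_BIT);
} else {
  // No WebGL available: fall back to 2D canvas drawing or static images.
}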

Also of note for Web application fans is Chrome 9’s support for IndexedDB, a developing standard for Web application storage. That could be instrumental in reinstating Google Apps’ ability to work offline, a major requirement for the success of Chrome OS and the cloud-computing philosophy.
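
As a rough sketch of what IndexedDB offers (the database, store name, and record below are invented, and early implementations shipped behind vendor prefixes), a Web application could persist data locally like this:

// Open (or create) a local database and save a record so the application
// has data to work with even when the network is unavailable.
const request = indexedDB.open("notes-app", 1);

request.onupgradeneeded = () => {
  // Runs on first open or version upgrade: define the object store.
  request.result.createObjectStore("notes", { keyPath: "id" });
};

request.onsuccess = () => {
  const db = request.result;
  const tx = db.transaction("notes", "readwrite");
  tx.objectStore("notes").put({ id: 1, text: "Saved for offline use" });
  tx.oncomplete = () => db.close();
};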

Speaking of Web applications, Chrome 9 also comes with a new task manager to show what Web applications are running, including background applications that might not be immediately apparent.

This article was first published as a blog post on CNET News.

Twitter closes massive funding round

A much-reported funding round for Twitter is finally complete, AllThingsD first reported earlier Thursday. The dollar amount is US$200 million at a US$3.7 billion valuation, led by investment firm Kleiner Perkins Caufield & Byers, and along with it Twitter has brought on Mike McCue, CEO of buzzy iPad app start-up Flipboard, and David Rosenblatt, former CEO of DoubleClick.

The company has confirmed the funding round and additions to the board–amusingly calling it a “stocking stuffer”–but isn’t releasing any further information about how it will be funneled into product strategy or hiring. Presumably, Twitter will continue to ramp up engineering resources (both human and technical) to support a growing user base while its advertising product remains in a malleable, experimental phase. The round will also likely allow early employees to cash out some company stock–much as Facebook did when Russian firm Digital Sky Technologies first invested in the social network.

Twitter’s last funding round was slightly over a year ago, and was no small amount–about US$100 million, a round which one early Twitter investor later said was not orchestrated out of financial need.

The company has aggressively shifted into “business mode” over the past nine months or so, announcing its first revenue strategy in April. In October, CEO Evan Williams stepped aside so that the more finance- and operations-focused Dick Costolo could take over.

This article was first published as a blog post on CNET News.

Microsoft gives Firefox an H.264 video boost

Mozilla is outspoken in its dislike of the patent-encumbered video technology called H.264, but Microsoft, an H.264 fan, is providing a plug-in that will let Windows 7 users use it anyway.

H.264 is a codec–technology to encode and decode video–that’s widely used in video cameras, Blu-ray players, online video streaming, and more. It’s built into Adobe Systems’ Flash Player browser plug-in, but most people don’t know or need to know it’s there.

When it comes to the flagship feature of built-in video support coming to the new HTML5 specification for creating Web pages, though, codec details do matter. Not all browsers support H.264 or its open-source, royalty-free rival from Google, the VP8-based WebM. That means Web developers must make sure they support both formats or provide a fallback to something like Flash. Otherwise they risk leaving some viewers behind.
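
In practice, developers handle the split with feature detection before choosing a source. A minimal sketch (the file names are invented) using the standard canPlayType check:

// Ask the browser which codec it can decode; canPlayType returns
// "probably", "maybe", or "" (an empty string means no support).
const video = document.createElement("video");
const h264 = video.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
const webm = video.canPlayType('video/webm; codecs="vp8, vorbis"');

if (h264) {
  video.src = "clip.mp4";   // e.g. Safari or IE9
} else if (webm) {
  video.src = "clip.webm";  // e.g. Firefox 4, Opera, or Chrome
} else {
  // Neither format is supported: fall back to a Flash-based player.
}
document.body.appendChild(video);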

To help bridge the divide, Microsoft is releasing a plug-in that lets Firefox tap into Windows 7’s native H.264 support for HTML5 video. The move could help pave over some of the new Web’s rough patches, but also irritate WebM fans who want to see the Web move to unencumbered technology.

“H.264 is a widely-used industry standard, with broad and strong hardware support. This standardization allows users to easily take what they’ve recorded on a typical consumer video camera, put it on the Web, and have it play in a web browser on any operating system or device with H.264 support, such as on a PC with Windows 7,” Microsoft said. “The HTML5 Extension for Windows Media Player Firefox Plug-in continues to offer our customers value and choice, since those who have Windows 7 and are using Firefox will now be able to watch H.264 content through the plug-in.”

Microsoft had already offered a related Firefox plug-in that let people watch Windows Media videos on the Web.

Mozilla is working to try to establish WebM as a required codec for HTML5, a specification standardized by the World Wide Web Consortium (W3C).

This article was first published as a blog post on CNET News.

Mark Zuckerberg named Time’s person of the year

Time magazine has chosen Mark Zuckerberg, the 26-year-old founder of Facebook, as its Person of the Year, an annual profile and title given to the individual who has “for better or for worse…done the most to influence the events of the year”.

In a year fraught with political turmoil and sweeping actions by controversial individuals, Zuckerberg was an unexpected choice. A popular vote among Time readers revealed that their pick was Julian Assange, the controversial founder of WikiLeaks. National politicians, international threats, and the leaders of controversial political movements were also up for consideration.

But it’s Zuckerberg who received 2010’s recognition, in a testament to the rising power of a new generation of Silicon Valley innovators and to how dramatically the Web and digital media have changed the ways in which we communicate.

“The way we connect with one another and with the institutions in our lives is evolving. There is an erosion of trust in authority, a decentralizing of power and at the same time, perhaps, a greater faith in one another. Our sense of identity is more variable, while our sense of privacy is expanding. What was once considered intimate is now shared among millions with a keystroke,” Time editor Richard Stengel wrote in an editorial explaining the magazine’s choice. “More than anyone else on the world stage, Facebook’s Mark Zuckerberg is at the center of these changes.”

This year, Facebook dominated many a technology headline as the company reached 500 million users around the world, made an extensive set of upgrades to its social-networking product that pushed it further into every corner of our online lives, and became a sensation in Hollywood with the release of “The Social Network”, an acclaimed film about Facebook’s origins that paints an insidious and in some ways fictionalized portrait of Zuckerberg. Jesse Eisenberg, the actor who played Zuckerberg onscreen, was nominated Tuesday for a Golden Globe award for Best Actor (one of five nominations for the film overall), and there is a strong chance that “Social Network” will pick up a handful of Academy Award nominations as well.

In 2007, in what was then a rare public appearance for Zuckerberg, he said that “once every hundred years, media changes” and implied that Facebook was at the vanguard of a fresh hundred-year change. Zuckerberg was promptly derided for saying something that was at best reflective of youthful cluelessness and at worst a sign of blossoming hubris.

Maybe his math was off and the 100-year estimate wasn’t quite accurate, but three years later it’s clear that Facebook has, in fact, been at the center of electrifying change in the way we communicate with the people around us and share information. And if Zuckerberg’s relentlessly hands-on approach to Facebook–which has only grown more pronounced over the years–is any sign, this could not have happened without the young, flip-flops-clad CEO.

This article was first published as a blog post on CNET News.

E-mail evolves to stay relevant in Web 2.0 era

Even as social media inundates today’s office arena, this does not necessarily spell the death of e-mail, which is heavily entrenched in corporate culture, analysts argued. Rather, e-mail will evolve to include more Web 2.0 features to meet end-users’ needs, they added.

Nick Ingelbrecht, research director at Gartner, for one, pointed out that despite the increased proliferation of social media and Web 2.0 at the workplace, e-mail remains “alive and well”. He also dismissed reports of the imminent demise of the e-mail platform, saying such predictions are “exaggerated”.

The analyst said e-mail remains a useful tool because it is “good at delivering a rich string of communications”. Its benefits include being able to send personal notes or broadcast messages to many people, as well as the ability to send attachments such as photos and documents, Ingelbrecht noted in his e-mail.

Social networking services such as Facebook or instant messaging (IM) have their benefits, but rather than compete with e-mail, they meet a different set of communication needs that are more interactive and intimate, Ingelbrecht stated.

The Gartner analyst also said that people will use different communication channels–be it e-mail, Facebook, IM or Twitter–depending on where they are, who they are talking to, what they are saying or doing, and what communication tools are at hand. “[In short,] different needs require different communication solutions at the workplace,” he said.

Steve Hodgkinson, research director at Ovum, concurred, adding that e-mail is used for a wide range of purposes beyond messaging. These include promotional material from marketers, as well as announcements and notifications from financial institutions for online commercial services such as Internet banking.

Additionally, for many people the inbox functions as a media feed, a marketing channel, a workflow system, a day-to-day content management system, and an archive or corporate record, he noted.

While enterprise 2.0 collaboration platforms might be able to do these tasks better, Hodgkinson said the existence of e-mail will not be threatened because it has long been a staple of office culture and work habits.

“Corporate cultures and work practices change slowly, and e-mail, for all its faults, is simple, direct and useful. Asking people to simply switch platforms misses the point about how deeply entrenched the use of e-mail is in people’s everyday work rhythms,” the Ovum analyst stated.

New social collaboration and IM tools have their merits, but the pace of adoption for enterprise 2.0 communication platforms is “steady, not explosive”, Hodgkinson said. Furthermore, it will take time for an organization’s IT department to set up the required programs, and for users to change their behavior, he added.

Nonetheless, Hodgkinson did not deny that as more Generation Y employees–those born from the late 1980s onward–enter the workforce, they bring their Facebook and Twitter preferences into established organizations.

Yet, he expressed skepticism over just how much this demographic of workers will want to mix their work and personal lives on social media platforms in the long run. He believes the Generation Y workforce will turn to corporate e-mail accounts for work purposes and leave social media communications for more informal situations.

Evolution, not revolution
Hodgkinson also mentioned that new enterprise collaboration tools are likely to integrate e-mail rather than totally replace it.

An earlier Gartner report in November seems to corroborate the Ovum analyst’s observation. The report stated that enterprise e-mail and social networks are no longer mutually exclusive and, as a result, new collaboration styles are being created that reflect this overlap.

Google’s e-mail service, Gmail, for instance, has video chat and phone call functions. This fusion of e-mail with social features is something systems analyst Fredrick Khoo is familiar with at the office. The IT professional told ZDNet Asia: “We still use e-mail, definitely. It’s just that it doesn’t look like the e-mail from 10 years ago.”

How much did ads affect Twitter’s 2010 trends?

Twitter just released a year-end list of top trends for 2010, much as search engines like Google and Bing release their top queries. But it’s a little different here.

Given Twitter’s status as a chattery network of rapid-fire conversations, both breaking news stories and pop culture–including, notably, pop-culture phenomena with small, devoted cult followings–dominate the list. Twitter’s algorithm for calculating top trends favors “novelty over popularity”, meaning that a sudden, unexpected spike from the death of a C-list celebrity may ultimately outrank an ongoing major news story on Twitter’s year-end list.

But in the rankings, there is also insight into Twitter’s own strategy and how some of the products and partnerships it has developed can affect–if not completely alter–conversations across the service. A handful of the trends appearing in Hindsight 2010 were “promoted” trends, a part of the advertising program that Twitter began to roll out this spring, and at least one was the result of an official media partnership with Twitter.

If you look at Twitter’s list of movie-related trends for 2010, for example, at least two of them (“Scott Pilgrim vs. the World” and “Despicable Me”) were promoted at their release via campaigns to purchase trends on Twitter. Given their follow-it-live nature, entertainment awards shows were naturally dominant on the TV trends list. But the top spot goes not to the Oscars or the Grammys, but to the MTV Video Music Awards, which created an official “Twitter Tracker” app in conjunction with the service in order to spur more discussion.

Of course, the majority of entries on Twitter’s Hindsight rankings were what you’d expect them to be–news events that sparked discussion on a broad, global scale. The summer’s World Cup soccer tournament in South Africa was big (“vuvuzela” was the fifth most popular trend overall), as were large-scale disasters like the earthquake in Haiti and the oil spill in the Gulf of Mexico. Oh, and then there’s pop singer Justin Bieber, a phenomenon that seems to have taken Twitter completely by surprise.

The point, though, is that Twitter’s year-end list seems to prove the potential for manipulating mass conversations–both on behalf of advertisers and via more impromptu viral campaigns–just as much as it proves that Twitter itself has moved far beyond the service that would crash during every Steve Jobs keynote. So maybe this dilutes the “authenticity” of what’s getting talked about on Twitter. It also, quite likely, hints that its fledgling business model has potential.

This article was first published as a blog post on CNET News.

Amazon hardware glitch disrupts EC2 and retail sites

Amazon suffered two hardware failures in its European network on Sunday, leading to widespread disruption of its business services and retail sites.

European Elastic Compute Cloud (EC2) and other cloud services were affected for up to two hours, Amazon said on Monday in an explanatory note on its service health dashboard.

The problem arose after a network device at Amazon’s datacentre in Ireland failed, the company said. While services were being shifted to another device, that too gave out.

Read more of “Amazon hardware glitch disrupts EC2 and retail sites” at ZDNet UK.

Microsoft joins coalition against Google’s ITA buy

On Monday morning, Microsoft became one of the newest members of the FairSearch.org Coalition, a group of companies and technology partners seeking to “support competition, transparency, and innovation in online search”. In the short term, the group’s big target is Google and its intended US$700 million acquisition of online travel firm ITA Software.

The topic is especially relevant for Microsoft due to its use of ITA’s technology in its Bing travel site (formerly Farecast), which relies on ITA’s algorithms as part of its recommendation system to tell users when it’s the right time to buy tickets. Microsoft acquired Farecast back in 2008, before later rolling the technology into MSN Travel, then Bing.com.

Microsoft is joined by U.K.-based search engine Foundem, and online travel agencies Zuji and Level…com out of Singapore and France, respectively. Others already a part of FairSearch.org include Sabre Holdings, Expedia, Kayak, and Farelogix.

As part of the ITA acquisition, Google has promised to honor existing agreements, which would include any with Microsoft and many of the other companies. However, FairSearch.org is arguing that the buy will still stifle innovation and push ticket prices up across the board.

“Acquiring ITA Software would give Google control over the software that powers most of its closest rivals in travel search and could enable Google to manipulate and dominate the online air travel marketplace,” the group said in a statement. “The end result could be higher travel prices, fewer travel choices for consumers and businesses, and less innovation in online travel search.”

Google has disputed such claims on a special site that breaks down the company’s intentions. However, the acquisition remains the focus of a Department of Justice review, which is looking into how the deal will affect Google’s already-powerful place in the search market.

This article was first published as a blog post on CNET News.

Amazon announces storage cloud for 5TB objects

Amazon Web Services has increased the maximum object size it will store from five gigabytes to five terabytes.

For Amazon Web Services (AWS), an object is a piece of data, file or group of files. AWS assigns an identification key to the object, which is stored across a number of datacentres in AWS’s S3 storage cloud.

AWS announced in a blog post last Thursday that it had boosted the potential size of objects. “A number of our customers want to store very large files in Amazon S3–scientific or medical data, high-resolution video content, backup files, and so forth,” Amazon Web Services wrote. “We’ve raised the limit by three orders of magnitude. Individual Amazon S3 objects can now range in size from one byte all the way to five terabytes (TB).”
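Reaching those sizes relies on S3’s multipart upload mechanism, which splits one object into as many as 10,000 independently uploaded parts of up to 5 GB each. Here is a rough sketch using the modern AWS SDK for JavaScript (v3), which postdates this article; the bucket, key and file path are placeholders:

```ts
import { createReadStream, statSync } from "node:fs";
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from "@aws-sdk/client-s3";

// Upload a large file in 100 MB parts. Up to 10,000 parts of up to 5 GB each
// is what lets a single S3 object reach the 5 TB ceiling.
async function uploadLarge(bucket: string, key: string, path: string) {
  const s3 = new S3Client({});
  const partSize = 100 * 1024 * 1024;
  const size = statSync(path).size;

  const { UploadId } = await s3.send(
    new CreateMultipartUploadCommand({ Bucket: bucket, Key: key })
  );

  const parts: { ETag?: string; PartNumber: number }[] = [];
  for (let offset = 0, n = 1; offset < size; offset += partSize, n++) {
    const end = Math.min(offset + partSize, size) - 1; // inclusive byte range
    const { ETag } = await s3.send(
      new UploadPartCommand({
        Bucket: bucket,
        Key: key,
        UploadId,
        PartNumber: n,
        Body: createReadStream(path, { start: offset, end }),
        ContentLength: end - offset + 1,
      })
    );
    parts.push({ ETag, PartNumber: n });
  }

  // S3 stitches the parts into one object only after this call succeeds.
  await s3.send(
    new CompleteMultipartUploadCommand({
      Bucket: bucket,
      Key: key,
      UploadId,
      MultipartUpload: { Parts: parts },
    })
  );
}
```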

Read more of “Amazon announces storage cloud for 5TB objects” at ZDNet UK.

Social media monitoring gains enterprise relevance

Advocates of social media monitoring are unanimous on its necessity and benefits: firms must actively manage their digital presence in today’s socially driven Web landscape, which they say helps enhance business strategies, raise brand awareness and grow the consumer base.

David Alston, chief marketing officer for Radian6, a social media monitoring provider, said that as social media networks grow larger in size and number, so does the volume of online chatter generated across these platforms.

Tapping into the sources of online interaction, a company can understand consumers’ needs and wants and measure how successful it has been in engaging its customers, he noted. With this in hand, brands can then develop a business strategy fully informed by “social insights”, the executive told ZDNet Asia in his e-mail.

“By leveraging social media effectively from adoption to management, businesses can build a stronger brand and stronger relationships that boost the bottom line,” emphasized Alston.

Benjamin Koe, co-founder and head of client leadership at Singapore-based social media monitoring solutions company JamiQ, held a similar view about the manifold advantages of social media management. He said in an e-mail that the amount of information shared on social media platforms is of great value as companies now have access to customer feedback as well as the ability to detect potential issues and identify fans and critics.

This information gives brands a clearer, holistic picture of their online reputation as seen by consumers, which in turn helps them better manage their digital presence and make informed decisions, he pointed out.

Koe added that evaluating one’s online status is a good start for any company looking to explore the benefits of jumping on the social media bandwagon.

“Even if you don’t intend to reach out to consumers [through social media], the least you can do is find out what everyone is saying about your company and products and make sense of it”, he stated.

Sifting relevance from chatter
According to Koe, the biggest challenge in social media monitoring is “information overload”. For example, he noted that a popular product launch such as the iPhone’s could generate thousands of unique user posts an hour. “The vast amount of data produced is making it near impossible for companies to read them all and understand what their customers are saying”, he said.
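To make the scale problem concrete, the first step any monitoring tool performs is reducing that raw firehose to counts. A toy TypeScript sketch follows; the Post shape is a hypothetical stand-in, and real products layer language detection, spam filtering and relevance scoring on top of bare keyword matching:

```ts
interface Post {
  author: string;
  text: string;
  timestamp: Date; // when the post was published
}

// Reduce a raw stream of posts to hourly mention counts for one brand.
function hourlyMentions(posts: Post[], brand: string): Map<string, number> {
  const counts = new Map<string, number>();
  const needle = brand.toLowerCase();
  for (const p of posts) {
    if (!p.text.toLowerCase().includes(needle)) continue;
    const hour = p.timestamp.toISOString().slice(0, 13); // e.g. "2010-12-08T14"
    counts.set(hour, (counts.get(hour) ?? 0) + 1);
  }
  return counts;
}
```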

Furthermore, the information is generated not just across various social media platforms such as Facebook and instant messaging chat rooms, but also in different languages. This is why JamiQ provides tools that monitor social media across all markets and languages, he said.

Radian6’s Alston pointed to another challenge for enterprises: how best to tap into the vast volume of social conversations, identify those that matter and fit them into existing business systems and processes.

“Companies need a scalable listening, engagement and insight solution so the useful information they find can be used throughout the enterprise,” Alston added.

Having a human touch
Besides creating suitable algorithms to track online chatter accurately, Koe noted that effective social media monitoring needs to have a strong human element to it.

“[What a company] needs is people who are willing to learn and experiment with different platforms. It could be Facebook and Twitter today, but maybe in a year’s time, there could be newer, bigger platforms,” he pointed out.

Ben Israel, digital strategist at public relations company Edelman Singapore, which offers social monitoring services, noted that many tasks such as mining data and monitoring the Web are efficient when outsourced or automated.

More valuable are analytical skills that can help a company know what to look for in the aggregated data, make sense of it, and recognize relevant patterns and trends to apply to critical business decisions, ideas and strategies, he explained.

Ultimately, an organization has to have a basic understanding of social media and familiarity with social platforms. This includes knowing basic online etiquette, and awareness of where conversations about its brand take place the most, whether it is through a Facebook page, a blog post, or Twitter, Israel concluded.

Thomas Crampton, Asia-Pacific director of digital influence at public relations firm Ogilvy, reasoned that knowledge of various social monitoring tools is not as important as understanding how to take a strategic approach to utilizing social media.

“Ogilvy invests a lot in training and spends a great deal of time training people how to use social media and setting up guidelines for interaction,” Crampton remarked. This is the only way to build a strong team for the execution of social media monitoring, he added.

Words too hard? Try Google’s new search filter

Google quietly added an advanced search feature over the last couple of days that sorts the Internet by reading level.

Search Engine Roundtable noticed that when you click on the “advanced search” link next to a Google search box on the right, you’re now presented with an additional option to sort by “reading level,” which lets you “annotate results with reading levels,” “show only basic results,” “show only intermediate results,” and “show only advanced results.”

A Google representative said in a statement that the company added this “as yet another way for people to pare down their results to the kinds of pages they’re most interested in.” The company cited teachers looking for materials for grade-schoolers, or researchers looking for detailed materials, as those who might want to employ this feature.

An interesting side effect, however, is that the tool allows searchers to compare the average reading levels of the content produced by Web sites by selecting “annotate results with reading levels” and typing the site’s domain into another field.

Google said it developed the categorization system with the help of teachers who were paid to sort Web pages into one of the three buckets, after which it built a statistical model to expand those rankings to the Web at large. Google didn’t provide further details on what type of criteria the teachers used to decide when a page was “basic” or “intermediate.”
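Google has not published its model, but a classic readability formula such as Flesch-Kincaid gives the flavor of how text can be scored and bucketed. A rough TypeScript sketch; the grade thresholds below are arbitrary assumptions, not Google’s:

```ts
// Crude syllable estimate: count vowel groups, minimum one per word.
function countSyllables(word: string): number {
  const groups = word.toLowerCase().match(/[aeiouy]+/g);
  return Math.max(1, groups ? groups.length : 1);
}

function readingLevel(text: string): "basic" | "intermediate" | "advanced" {
  const sentences = Math.max(1, (text.match(/[.!?]+/g) ?? []).length);
  const words = text.split(/\s+/).filter((w) => w.length > 0);
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);

  // Flesch-Kincaid grade level: longer sentences and longer words score higher.
  const grade =
    0.39 * (words.length / sentences) +
    11.8 * (syllables / words.length) -
    15.59;

  if (grade < 6) return "basic"; // roughly grade-school prose
  if (grade < 12) return "intermediate";
  return "advanced"; // college-level and beyond
}
```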

For example, 74 percent of the content on CNET News is considered “intermediate,” whereas 72 percent of the content on TMZ is considered “basic” and 72 percent of the content on the National Nanotechnology Initiative’s site is considered “advanced.”

Literary snobs, consider this an early Christmas present from Google. And if anyone knows a teacher who participated in this study, please have them get in touch.

This article was first published as a blog post on CNET News.

WikiLeaks fans should think before they botnet

Do you support WikiLeaks? Are you mad at critics trying to snuff it out? Maybe you’re thinking about joining the online protests aimed at shutting down the Web sites of its opponents. Don’t.

A loosely organized group of vigilantes under the name Anonymous has turned the botnet guns of its Operation Payback campaign, which previously targeted antipiracy organizations, on PayPal, Visa, MasterCard, Senator Joe Lieberman, Sarah Palin, and others who have criticized WikiLeaks or stopped doing business with the document-sharing project. The WikiLeaks fallout has reached a frenzy since the site began releasing diplomatic cables last month that have proved embarrassing for the U.S. government’s diplomatic efforts.

The modern-day equivalent of walking the picket line with a sign is launching denial-of-service attacks against target Web sites in order to send a message and try to interfere with their business. But the electronic version is illegal.

“Participating in a botnet with the intention of shutting down a Web site violates the Computer Fraud and Abuse Act,” said Jennifer Granick, a lawyer at Zwillinger Genetski who specializes in Internet law and hacking cases. “The thing people need to understand is that even if you have a political motive, it doesn’t change the fact that the activity is unlawful.”

One person accused of being connected with the attacks has already been arrested. Police in the Netherlands arrested a 16-year-old hacker last week, though it’s unclear what his alleged role was.

Typical botnets are created by criminals who use viruses and other methods to sneak malware onto computers, allowing them to commandeer the machines for distributed denial-of-service (DDoS) attacks without the owners knowing it. Hijacked computers are being used in the Operation Payback campaign, but the focus has been on getting individuals to join voluntarily.

Thousands of people from around the world are downloading the LOIC (Low Orbit Ion Cannon) software so that their computers will attack the targets the Anonymous organizers specify. New versions of the DoS tool have emerged this week. There is a version for Linux and a Windows version that includes a “Hivemind” feature to connect to an Internet Relay Chat server and allow the organizers to control which site the computer targets.

There is even a JavaScript version that runs on any device, including smart phones. “The JavaScript one, you just point the browser at a site and say ‘go,'” said Jose Nazario, senior manager of security research at Arbor Networks.

As many as 3,000 computers voluntarily participated in attacks earlier this week, and an estimated 30,000 others appeared to be hijacked, according to Sean-Paul Correll, a threat researcher at Panda Labs who has been following the attacks closely and communicating with Operation Payback organizers.

There’s a snag, however, for the volunteer botnet protesters–their Internet Protocol (IP) addresses are not masked, so the attacks could ultimately be traced back to the computers launching them, experts say. Of course, it is up to prosecutors’ discretion whether individual botnet volunteers will be pursued by authorities.

“There may be strength in numbers,” said Granick. “There’s only so many people the police could go after. But that doesn’t mean that they couldn’t find out who is behind the unmasked IP numbers and file computer charges against them.”

Operation Payback is itself fending off DoS attacks that have hampered its efforts. The servers providing the infrastructure for Operation Payback have been taken offline intermittently; no one has taken responsibility for those attacks. “Right now it appears they are regrouping and strategizing for future attacks,” said Correll. (Anonymous says its goal is to raise awareness, not to interfere with targets’ critical infrastructure.)

Meanwhile, a separate campaign sprang up out of nowhere that could give WikiLeaks fans a more legal way of expressing their support for the cause. An online flyer for “Operation Leakspin” published by Boing Boing encourages people to find juicy bits in the leaked cables and spread them virally on the Internet in blog posts and YouTube videos, using unrelated tags to ensure broad interest.

It’s unclear who is behind Operation Leakspin. “There’s no hierarchical structure (to the Anonymous collective), so when things happen, like their server infrastructure is under attack, people tend to want to take control of the campaign,” Correll said.

“Even though thousands of people want to participate there doesn’t seem to be a cohesive plan about what to do next,” he said. “It’s fizzling out.”

This article was first published as a blog post on CNET News.


Twitter: We aren’t blocking WikiLeaks info

Twitter Thursday tried to put an end to rumors that it’s blocking WikiLeaks-related terms from its list of trending topics–the most popular phrases appearing at a given time throughout the microblogging service.

The reason why terms like #wikileaks and #cablegate fell off Twitter’s trending topics list, according to a post on the official company blog, is simply because not enough people are talking about them.

“Sometimes a topic doesn’t break into the Trends list because its popularity isn’t as widespread as people believe,” the blog post explained. “And, sometimes, popular terms don’t make the Trends list because the velocity of conversation isn’t increasing quickly enough, relative to the baseline level of conversation happening on an average day; this is what happened with #wikileaks this week.”
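Twitter has not published the algorithm, but the idea in that explanation fits in a few lines of TypeScript: score a term by its current velocity relative to its own everyday baseline, so a perennially chatty topic barely registers while a sudden spike on a quiet one stands out. The numbers below are invented for illustration:

```ts
// Toy trend score: growth relative to the term's own baseline, not raw volume.
function trendScore(currentPerHour: number, baselinePerHour: number): number {
  // Add 1 to smooth the baseline so brand-new terms don't divide by zero.
  return currentPerHour / (baselinePerHour + 1);
}

// A term tweeted constantly has a huge baseline, so even a big hour barely
// moves its score; a sudden burst on a quiet term wins.
console.log(trendScore(50_000, 40_000).toFixed(2)); // "1.25"  - not trending
console.log(trendScore(5_000, 200).toFixed(2));     // "24.88" - trending
```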

This may be a testament to the fact that Twitter, with more than 95 million messages posted per day, is no longer the domain of media and politics junkies that it was a few years ago, and that discussion of the controversial WikiLeaks document repository may now be overshadowed by other topics–like pop singer Justin Bieber, who one third-party researcher says is the subject of a full 3 percent of tweets.

A report by Pew Internet Research released today highlighted the service’s penetration into the mainstream, estimating that 8 percent of Americans who use the Web also use Twitter.

This week, after Amazon Web Services and PayPal blocked WikiLeaks, speculation arose as to whether social-media outlets like Facebook and Twitter would suppress any information pertaining to the site or its founder, Julian Assange. Facebook said Monday that it did not plan to ban the WikiLeaks “fan page,” claiming that at present it does not violate the social network’s terms of service.

This article was first published as a blog post on CNET News.


Delays in iTunes song samples cause confusion

Apple has finally rolled out 90-second samples for songs sold in the United States that are longer than 2.5 minutes–at least for those that iTunes has so far managed to equip with the longer preview.

Some bloggers and iTunes users have questioned why longer previews don’t accompany every song. As first reported in August by ZDNet Asia’s sister site, CNET, Apple approached the top-four recording companies last summer about the longer samples that iTunes users can hear to test drive songs before buying. Researchers say that longer song samples stimulate sales.

According to several music industry sources, Apple has acquired licenses to the longer samples only for the U.S., but the company is in talks to acquire rights to extended previews for overseas markets. As for U.S. iTunes users who find a song that should be eligible for a longer sample but lacks one, the sources said Apple is hard at work attaching them; it takes time to make the switch, but it should be completed very soon.

Apple came close to announcing the extended samples last fall: it secured the okay from the labels and was ready to announce the previews at a media event on Sep. 1. The event came and went without a peep from CEO Steve Jobs about the longer samples. It turns out that the National Music Publishers Association (NMPA) read CNET’s story about Apple’s plans and informed the company that, as far as the trade group was concerned, Apple also needed its approval or there was going to be a problem, NMPA managers told CNET.

They told Apple that the trade group, which represents songwriters and music publishers, wouldn’t necessarily have a problem offering the songs “gratis,” or for free, but they wanted time to study the deal, which they said Apple didn’t offer them.

Since then, Apple has written to independent record labels telling them that the company planned to offer 90-second previews on songs longer than 2.5 minutes. Apple said the only way for the indie labels to opt out was to remove their songs.

That the music publishers were able to hold up the offering illustrates their growing influence in the music industry. Publishing is one area of the business that typically remains profitable. In the past, a label’s recorded-music division largely steered the ship, but recently, labels have turned to music publishing for guidance. EMI Group tapped Roger Faxon, the former head of the label’s publishing arm, to run the entire company.

If music publishing’s power continues to grow, look for David Israelite, the NMPA’s CEO, to step more into the spotlight.

This article was first published as a blog post on CNET News.

Ex-WikiLeakers to launch new Openleaks site

WikiLeaks will soon have some competition on the whistle-blowing front.

Several people who resigned from the WikiLeaks project amid conflicts with organizer Julian Assange are planning to launch a new site called Openleaks next Monday, Swedish newspaper Dagens Nyheter reported today.

“Our long-term goal is to build a strong, transparent platform to support whistle-blowers–both in terms of technology and politics–while at the same time encouraging others to start similar projects,” an Openleaks organizer, who wished to remain anonymous, told the newspaper. “As a short-term goal, this is about completing the technical infrastructure and ensuring that the organization continues to be democratically governed by all its members, rather than limited to one group or individual.”

Assange’s former partners left WikiLeaks because of the “top-down management style” and because Assange’s personal problems were distracting from the work, the report said.

Openleaks will not directly publish information it receives but will allow media outlets and other organizations to access the system and disclose what they want, according to internal Openleaks documents. The group will serve as a neutral intermediary with no political agenda, which could minimize any heat from governments.

“As a result of our intention not to publish any document directly and in our own name, we do not expect to experience the kind of political pressure which WikiLeaks is under at this time,” a source told the newspaper. “In that aspect, it is quite interesting to see how little of politicians’ anger seems directed at the newspapers using WikiLeaks sources.”

The news comes amid turmoil for WikiLeaks and its public face, Assange, who is behind bars in London on Swedish sex-related charges, which he denies.

Meanwhile, the WikiLeaks site has had to rely on mirror sites after being disconnected by infrastructure providers, and its funding has been hit by PayPal, Visa, and MasterCard halting payments to the group. Activist supporters of WikiLeaks have been targeting those firms with denial-of-service attacks, getting attacked themselves, and being booted off Twitter and Facebook.

During the summer, WikiLeaks released confidential documents on the wars in Afghanistan and Iraq, and last month its release of 250,000 diplomatic cables further angered U.S. officials.

This article was first published as a blog post on CNET News.


MyCube: Protect online persona from ‘digital murder’

Start taking personal responsibility for, and control of, your digital identity, or risk leaving your online persona for “evil” companies such as Facebook and Google to freely exploit for monetary gain.

That is the message Johan Stael von Holstein, the Swedish Internet entrepreneur behind startup MyCube, wants to spread globally and in Asia.

In an interview with ZDNet Asia, he noted that a person’s digital assets and digital identity should not be owned by anybody “but yourself”. Digital assets or Web properties can include an individual’s blog posts, photos, personal profile, experience and interactions with contacts on a social networking site, e-mail messages and addresses, virtual credits, and online billing histories.

von Holstein, the 47-year-old CEO of startup MyCube–which he founded in 2008–said most Internet users have yet to realize that they are “giving away parts of their brains, knowledge and experience to Web 2.0 companies to freely exploit for their own benefit”.

In two to three years’ time, however, he said more people will mature and finally grasp that these companies are exploiting consumers’ digital assets for considerable monetary value. Then, online users will take steps to protect their digital identity, he noted, giving them full control and discretion over how their online content is organized and managed–and subsequently monetized, should they choose to.

And when they do, MyCube is banking on its digital life management services to help users regain “privacy, control and ownership” of their digital lives. The MyCube Vault, for example, launched Nov. 28, is an open source storage application that lets users make on-demand or automated backups of their social media content from various sites such as Facebook, Flickr and e-mail services. The aggregated content can then be stored on their computer’s hard drive.

The company is now prepping for the public release of its second service, MyCube Exchange, touted as a user-centric, content-rich, next-generation social network where users have complete control over their privacy and interactions on the Web. For instance, the site’s default privacy setting for a profile will be ‘private’ instead of ‘public’.

Exchange is currently available in private beta; von Holstein said the public beta is set to launch Jan. 15 next year in Singapore, Sweden, Great Britain and Australia, and will go global Feb. 15.

According to the CEO, there are already “tens of thousands of pre-registrations” for Vault and Exchange across the globe.

Before MyCube, the Swede had co-founded Web consultancy firm IconMedialab in 1996, set up e-commerce site Letsbuyit.com–which has since been sold–and started entrepreneurship center IQube.

Facebook is “dead”
For von Holstein, user privacy boils down to one thing: “You cannot trust anybody but yourself.”

While the Internet entrepreneur acknowledged that MyCube is akin to the antithesis of Facebook–at least, in its stance on user privacy–he noted that “MyCube isn’t going to kill Facebook”. “That’s not my ambition… I just want to make sure that Facebook cannot spy and steal [content that] is not theirs,” he said.

Vocal in his criticism of Facebook co-founder and CEO Mark Zuckerberg, von Holstein dismissed the 26-year-old’s statement that “the age of privacy is over”.

von Holstein said: “He’s one of the richest guys in the world. Do you think he’s going to put on Facebook where he lives and when he’s out driving?

“I’m absolutely sure of the fact that Facebook is dead, whether MyCube launches or not. [Zuckerberg] just doesn’t know it yet. People [are going to revolt] for a thousand different reasons,” he said. “The [monetary] values he is stealing off individuals that are not his–that monetization needs to be given back to these individuals.”

Committing “digital murder”
According to von Holstein, taking personal responsibility for one’s digital life is also essential in the volatile corporate world. Elaborating, he referred to the past glories of Internet kings such as search engine AltaVista and Web browser Netscape, whose shelf lives have since expired.

“We do not know who is going to run Facebook 10 years from now or whether it will get bought by China. What if Flickr goes bankrupt, what happens to the thousands of photos I uploaded there? There are a thousand reasons why we cannot give away our digital assets to somebody else,” he said.

Describing the removal of one’s online identity as equivalent to “digital murder”, he reiterated the importance of protecting one’s digital life.

“If someone deletes your physical identity, we call it murder, and it’s illegal for a very good reason. So if [a social network] deletes me, my digital identity, isn’t that digital murder? Shouldn’t that be totally illegal?” he questioned.

von Holstein recounted how the Facebook account of his son, then 10 years old, was deleted by the social networking site after an adult–whose friend request was rejected–made a report against his son.

“My son’s Facebook friends were as true and real to him as our relationships in the real world. [Facebook] totally took away his online social status–all his 350 friends, gone. He was digitally murdered by Mark Zuckerberg.

“And if Mark Zuckerberg finds out I’m building MyCube and dislikes me, he can delete my Facebook account,” he added. “[All] my friends and saved messages are in the hands of a guy, who at his will at any time, can delete me just because he wants to. That is a nightmare.”

Twitter brings more media, music to Web client

Twitter announced five new content partners on Monday that bring more multimedia and video content into its recently overhauled Web site, furthering the company’s march away from simply coughing up streams of 140-character messages. The latest arrivals on Twitter.com are videos from syndication platform Blip.tv, hipster-filtered photos from trendy iPhone app Instagram, full-length streaming songs from Rdio, presentations from Slideshare, and works from artist community site Dipdive.

So what does this mean? When you click on a link in a tweet that you see in your stream on Twitter.com, if it comes from one of Twitter’s content partners, a specialized widget will pop up in lieu of a separate link.

Twitter currently has around 20 content partners, and the number continues to grow. Last month, Twitter integrated iTunes’ Ping service into its Web client to provide song previews and links to purchase music.

It’s all been part of a massive operation to make the Twitter.com site a bigger draw for users, many of whom had opted to use third-party desktop and Web clients instead of the more basic original Web site. Some of its moves have been controversial, as Twitter itself cuts into the territory of some of the developer applications that its open-ended API facilitated in the first place.

The third-party Web client most similar to Twitter’s new look, Brizzly, got lucky: It sold to AOL earlier this year and its team is now working on AOL’s own Lifestream feed-aggregation service.

This article was first published as a blog post on CNET News.

Facebook: We won’t block WikiLeaks, for now

The biggest social-networking site in the world broke with many of its online brethren Wednesday when it issued a statement saying that it will not ban content from a “fan page” associated with WikiLeaks, the controversial repository of leaked confidential documents whose founder, Julian Assange, is currently on the run.

“The WikiLeaks Facebook Page does not violate our content standards nor have we encountered any material posted on the page that violates our policies,” said the statement, which was prepared when ReadWriteWeb’s Marshall Kirkpatrick started poking around to see which online services may follow the lead of Amazon Web Services and PayPal in blocking WikiLeaks. It’s a well-crafted statement, however, one that leaves open the possibility Facebook could change course. All it’s saying right now is that Facebook does not currently believe WikiLeaks has posted content to its page that violates the social network’s terms of service.

Facebook’s handling of whether to block controversial and potentially harmful content from its servers has not been without criticism: it has opted not to ban groups pertaining to Holocaust denial, for example, claiming that while it finds Holocaust denial “repulsive and ignorant,” the groups are allowed to stay on the social network if they do not contain illegal material. WikiLeaks, obviously, is a different and far more complicated matter entirely. Many believe Assange could have blood on his hands for leaking documents that could put the U.S. or its allies in danger overseas, and the incoming chairman of the House Homeland Security Committee has said he wants WikiLeaks listed as a terrorist organization. However, Assange has also become a hero for free-speech and government transparency advocates.

It’s a status that has only been elevated since the recent WikiLeaks document releases and the subsequent attempts by corporations and lawmakers to stop Assange. The WikiLeaks page on Facebook has nearly 1 million followers.

This article was first published as a blog post on CNET News.


Fast-encryption feature arrives in Chrome

Google has begun shipping a feature called False Start in its Chrome browser to speed up secure communications.

False Start essentially cuts out one set of the back-and-forth conversation needed to set up a secure channel between a Web browser and Web pages. Such secure channels use technology called SSL (Secure Sockets Layer) or TLS (Transport Layer Security), and a Web site using it shows an address beginning with HTTPS rather than HTTP.

“The latest releases of Chrome now enable a feature called SSL False Start,” said Google programmer Mike Belshe in a blog post last Sunday. “As of this writing, Chrome is the only browser implementing it.”

Belshe’s tests showed it trimming less than a tenth of a second. That may not sound like much, but bear in mind that Web developers strive to shave off any amount they can, and that the security handshake often must be completed more than once for a single Web site because of multiple secured elements.
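To get a feel for the cost being trimmed, one can time a full handshake from Node.js. A small TypeScript sketch follows; it measures the TCP connect plus the complete TLS handshake, and does not itself implement False Start, which lives inside the browser’s TLS stack:

```ts
import * as tls from "node:tls";

// Resolve with the milliseconds spent on TCP connect + TLS handshake.
function handshakeMillis(host: string, port = 443): Promise<number> {
  return new Promise((resolve, reject) => {
    const start = process.hrtime.bigint();
    const socket = tls.connect({ host, port, servername: host }, () => {
      // This callback fires on 'secureConnect': the handshake is complete.
      const ms = Number(process.hrtime.bigint() - start) / 1e6;
      socket.end();
      resolve(ms);
    });
    socket.on("error", reject);
  });
}

handshakeMillis("www.google.com").then((ms) =>
  console.log(`TLS handshake took ${ms.toFixed(1)} ms`)
);
```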

False Start is one of a handful of technologies Google is building into Chrome to try to make the Web faster. Faster encrypted communications are a particular focus, especially with the debut of the Firesheep software that can extract personal data from unsecured Web communications.

False Start is a nice technology because, unlike many communication improvements, it requires a change only to the browser, not to the other end of the line. But there’s a wrinkle: some Web sites can’t handle False Start, and they don’t fail gracefully.

Thus, Chrome has a blacklist to disable the feature for these sites. According to the Chrome source code, that list is 5,106 sites long so far.

This article was first published as a blog post on CNET News.