Latest Technologies guide news

Posted: December 13, 2010 in Latest Techguide

5 tips for learning how to use Server Core

As organizations work to increase the density of the virtual servers running on their host servers, many are turning to Server Core deployments.

Server Core lacks a lot of the GUI features found in more traditional Windows Server deployments. It’s a lightweight server operating system, which makes it ideal for use in virtual data centers. Even so, there’s no denying that Server Core can be a bit intimidating and that a learning curve is associated with managing Server Core operating systems.

In this article, I will provide five tips for learning how to use Server Core.

1: Set up a lab machine
Without a doubt, the best advice I can give you is to set up a few lab machines and install Server Core. That way, you can experiment with configuring and managing the operating system without having to worry about harming your production systems.

As you do, don’t be afraid to get your hands dirty. The deeper you dig into Server Core on your lab machines, the better equipped you will be to manage Server Core deployments in the real world.

2: Understand the difference between the command line and PowerShell
I have read several blog posts that have incorrectly reported that administrators must use PowerShell cmdlets to manage Server Core operating systems. Although Server Core is managed from the command line, there is a difference between the command line and PowerShell.

The command line traces its roots back to DOS and has existed in one form or another in every version of Windows ever released for the x86/x64 platform. Although some command-line commands will work in PowerShell, PowerShell commands will not work in a command-line environment.

The command line is the primary interface for managing Server Core. In fact, PowerShell isn’t even natively supported on Windows Server 2008 Server Core servers (although there is an unofficial workaround that can be used to add PowerShell support). PowerShell is natively available on Server Core servers that are running Windows Server 2008 R2, but it’s not installed by default. Microsoft Support provides instructions for enabling PowerShell.
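
As a rough sketch of what that looks like on a 2008 R2 core installation (the feature names below are the ones I recall DISM using; list them yourself with dism /online /get-features before relying on them):

dism /online /enable-feature /featurename:NetFx2-ServerCore
dism /online /enable-feature /featurename:MicrosoftWindowsPowerShell

Once the features are enabled (a reboot may be required), powershell.exe can be started from %windir%\System32\WindowsPowerShell\v1.0.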

3: Check out the available graphical utilities
Even though the whole point of Server Core is that it’s supposed to be a lightweight server OS without a GUI, it actually does have a GUI. Several graphical utilities can help you with the initial server configuration process.

The best of these utilities (in my opinion) is Core Configurator 2.0, an open source utility that’s available as a free download. It’s designed to help you to do things such as naming your server, configuring its network settings, and licensing the server.

In addition, Microsoft includes a configuration utility called Sconfig with Windows Server 2008 R2. Simply enter SCONFIG.CMD at the command prompt, and Windows will launch the Server Configuration utility. This utility is similar to the Core Configurator, but its options aren’t quite as extensive. The Server Configuration utility will help you to do things like joining a domain or installing updates.

4: Don’t forget about graphical management tools
When you manage a normal Windows 2008 server, you use built-in management utilities, such as the Active Directory Users And Computers Console and the Service Control Manager. Although such utilities connect to the local server by default, they’re designed to let you manage other servers on your network, including servers that are running Server Core.

Even though Server Core operating systems don’t come with a comprehensive suite of management utilities, there is absolutely nothing stopping you from connecting to a core server from another server’s management consoles and managing that core server in exactly the same way that you would if it were running a graphical Windows Server operating system.
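
Before those consoles can connect, the relevant remote-management firewall rules have to be enabled on the core server. A minimal sketch, assuming the built-in rule group name I remember from Windows Server 2008 R2 (check netsh advfirewall firewall show rule name=all on your own systems): the first command opens the firewall rule group the MMC-based consoles rely on, and winrm quickconfig additionally enables WinRM for tools and scripts that use it.

netsh advfirewall firewall set rule group="Remote Administration" new enable=yes
winrm quickconfig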

5: Learn Server Core’s limitations
Because Server Core is a lightweight server operating system, it’s not suitable for all purposes. Plenty of third-party applications simply will not run on a Server Core deployment.

In addition, many of the roles and role services that are often run on traditional Windows Server 2008 R2 servers are not supported on Server Core deployments. The actual roles that are supported by Server Core vary depending on the edition of Windows you are installing.

For instance, Windows Server 2008 R2 Web Edition supports only three roles, while the Datacenter and Enterprise Editions support 11 roles:

  • Active Directory Certificate Services
  • Active Directory Domain Services
  • Active Directory Lightweight Directory Services
  • BranchCache Hosted Cache
  • DHCP Server
  • DNS Server
  • File Services
  • Hyper-V
  • Media Services (this role must be downloaded separately)
  • Print Services
  • Web Services (IIS)

Microsoft provides a full list of the roles that are supported by the various editions of Windows Server 2008 R2.

Brien Posey is a seven-time Microsoft MVP. He has written thousands of articles and written or contributed to dozens of books on a variety of IT subjects.

Obtaining network information with netstat

One of the best utilities on Linux for network troubleshooting is a very simple one: netstat.

Netstat can provide a lot of information, such as network connections, routing tables, interface statistics, and more. It displays information on various address families, such as TCP, UDP, and UNIX domain sockets.

Of course, all of this can also make it a daunting tool to use if you have never used it before.

While netstat is useful as a regular user, you need to run it as root to get the most out of it. For instance, to determine which program is listening on a port or socket (the -p switch), you must have root privileges.

To see all of the TCP ports being listened to on the system, and by what program, use:

# netstat -l --tcp -p
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address    State       PID/Program name
tcp        0      0 *:ssh                       *:*                LISTEN      1666/sshd
tcp        0      0 localhost.localdomain:smtp  *:*                LISTEN      1841/sendmail: acce
tcp        0      0 *:mysql                     *:*                LISTEN      1807/mysqld
tcp        0      0 *:http                      *:*                LISTEN      1873/httpd
tcp        0      0 *:https                     *:*                LISTEN      1873/httpd

From the above, you can see that sshd is listening to port 22 (netstat will display the port name from /etc/services unless you use the “-n” switch), on all interfaces. Sendmail is listening to port 25 on only the loopback interface (127.0.0.1), and Apache is listening to ports 80 and 443, while MySQL is listening to port 3306 on all available network interfaces. This gives you an idea of what services are running, and what ports they are listening to; this is one way to determine if something is running that shouldn’t be, or isn’t running when it should be.

The same can be done for UDP, again, to make sure that nothing is listening for active connections that shouldn’t be:

# netstat -l --udp -p -n
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address    State       PID/Program name
udp        0      0 0.0.0.0:68                  0.0.0.0:*                      1292/dhclient
udp        0      0 192.168.250.52:123          0.0.0.0:*                      1679/ntpd
udp        0      0 127.0.0.1:123               0.0.0.0:*                      1679/ntpd
udp        0      0 0.0.0.0:123                 0.0.0.0:*                      1679/ntpd
udp        0      0 0.0.0.0:42022               0.0.0.0:*                      1292/dhclient
udp        0      0 ::1:123                     :::*                           1679/ntpd
udp        0      0 fe80::226:18ff:fe7b:123     :::*                           1679/ntpd
udp        0      0 :::123                      :::*                           1679/ntpd
udp        0      0 :::15884                    :::*                           1292/dhclient

As you can see from the above, netstat will display anything listening to IPv4 or IPv6 addresses.

Netstat isn’t restricted to telling you what is listening to ports; it can also tell you active connections, like this:

# netstat --tcp -p
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address             Foreign Address        State  PID/Program name
tcp   0      0 wrk.myhost.com:53231    wrk2.myhost.com:ssh         ESTABLISHED 3333/ssh
tcp   0      0 wrk.myhost.com:44401    iy-in-f113.1e100.net:http   TIME_WAIT   -
tcp   1      0 wrk.myhost.com:51848    204.203.18.161:http         CLOSE_WAIT  2729/clock-applet
tcp   0      0 wrk.myhost.com:821      srv.myhost.com:nfs          ESTABLISHED -
tcp   0      0 wrk.myhost.com:59028    iy-in-f101.1e100.net:http   TIME_WAIT   -
tcp   0      0 wrk.myhost.com:37120    dns.myhost.com:ldap         ESTABLISHED 1658/sssd_be
tcp   0      0 wrk.myhost.com:ssh      laptop.myhost.com:52286     ESTABLISHED 3274/sshd: joe [

From the above, you can see that the first connection is an outbound SSH connection (originating from port 53231, destined for port 22). You can also see some outbound HTTP connections from the GNOME clock-applet, as well as outbound authentication requests from SSSD, and outbound NFS. The last entry shows an inbound SSH connection.

The -i switch provides a list of network interfaces and the number of packets transmitted:

# netstat -i
Kernel Interface table
Iface       MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0       1500   0    60755      0      0      0    40332      0      0      0 BMRU
lo        16436   0      149      0      0      0      149      0      0      0 LRU

An interesting “watchdog” use of netstat is with the -c switch, which will print a continuous listing of whatever you have asked it to display, refreshing every second. This is a good way to observe changes that are happening (connections being opened, etc.).
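
For example, to watch TCP listeners and connections update continuously (press Ctrl+C to stop):

# netstat -c --tcp -p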

Finally, you can use netstat in place of other commands: netstat -r shows the kernel routing table (similar to route -n), and netstat -ie shows interface information identical to ifconfig's.

Netstat can provide a lot of information that can be very useful in tracking down various network related problems, or just to keep an eye on the system, making sure that no unauthorized programs are listening for incoming network connections.

Keep in mind that netstat tells you what is actively listening or connected; it cannot tell you if a firewall is blocking that port. So while a service might be noted as listening, it may not actually be accessible. Netstat doesn’t provide the entire picture, but it can certainly help provide useful clues.

Vincent Danen works on the Red Hat Security Response Team and lives in Canada. He has been writing about and developing on Linux for over 10 years.

Social media a double-edged sword for SMBs

With small and midsize businesses (SMBs) accounting for over 90 percent of the Philippine economy, technology–in particular, social media–is seen as a valuable tool to democratize the playing field and make it easier for local companies to compete with the industrial giants.

Adopting social networks is ideal for local SMBs since the Philippines now boasts the highest usage of online social activities in the Asia-Pacific region, according to online analyst comScore.

“Social media is a powerful tool and with great power comes great responsibility.” 

— Joey Alarilla
Yahoo Southeast Asia

Although it is second to Indonesia in the region in terms of Facebook user base, the Philippines has the highest penetration rate of social media users, with 90.3 percent of the country’s Web population owning a Facebook account.

But, experts warned that social media could become a double-edged sword if deployed by an overzealous company that does not have a proper strategy in place.

“Social media is not a silver bullet. It won’t magically transform your company,” said Manila-based Joey Alarilla, head of social content strategy for Yahoo Southeast Asia. “If your product or service isn’t good and there are no efforts to improve it, social media will only highlight your inadequacies and annoy your customers.”

In an e-mail interview, Alarilla explained that entrepreneurs must stick to their business objectives and remember the purpose of establishing conversations with their online audience.

“Business owners should avoid saying anything they may regret online,” he said. “Social media is a powerful tool and with great power comes great responsibility. We’ve seen many cautionary tales of businesses that have suffered public embarrassment and backlash against their brands when conversations become too heated.”

Plunging headlong into the social Web without preparation, according to Filipino social media guru, Sonnie Santos, could also result in the miscommunication of the company’s message to its target market.

The tell-tale signs of an ill-conceived social media strategy include having unclear or no rules on online engagement, untrained employees, and the lack of a point-person to support a social Web campaign, Santos said.

However, he warned that SMBs would also be missing out on the market of young professionals if they ignore social media as a communication tool. “[But] if used ignorantly, resources are wasted, productivity is lost, and online reputation can be damaged by employees who use the tool without proper guidance,” he added.

Despite the potential pitfalls, embracing the Web as well as a social media policy will likely prove to be the cheapest and most effective way for SMBs to expand their footprint.

Timothy Birdsall, director of Lotus software at IBM Asia-Pacific, which recently launched a social media-enhanced messaging suite in the Philippines, said resource-strapped SMBs will only need to invest in the initial setup to roll out a social media strategy.

Birdsall added: “All they need to do is to create a profile. Put that out, together with their capabilities, and people will find them online. The savings will be infinite.”

Santos, however, recommended that SMBs also hire a consultant to craft their online philosophy and social Web policy, as well as set up the site’s integration with other social media accounts.

Blending social with existing channels
Alarilla noted that social media should also be “part of a 360-degree marketing campaign” so that it complements the company’s online display advertising, search marketing, events and print advertising.

“For SMBs, social media is great at bringing you to where your customers are, and giving your company a human face as you engage them on social networks,” he said.

As social media is no panacea, it should only be deployed by SMBs in areas where it can be used as an effective and measurable digital marketing tool.

Santos highlighted relevant departments within an organization that should use the social Web: marketing; customer service and relations; human resources to support recruitment, training, corporate communications and employee engagement; and operations, which is applicable only in certain industries.

He added that the level of engagement would depend on the nature of business and target market. “B2Cs (business-to-consumers) should employ a deeper level of engagement, while B2Bs (business-to-business) should use social Web primarily to manage their online reputation,” he said.

Alarilla suggested that rather than formulate their social media strategies from scratch, SMBs could explore social media platforms that have been built specifically for their needs.

For instance, he explained that Yahoo currently has a location-based social networking site in Indonesia called, “Koprol for Business”. The service has been designed for SMBs to create self-managed business listings and targets users who are in the vicinity of their business to improve their chances of engaging with customers, he said.

He added that social media can only be effective if every stakeholder within the enterprise embraces it.

“It should transform your company’s internal processes and break down silos,” he said. “Ideally, every employee should become a social media evangelist for the company, just as your company’s goal is to turn your users into your brand advocates.”

“Prior to plunging into social media, companies should manage expectations and make stakeholders realize that social media is not a sprint, but a marathon that should be part of a long-term business strategy and overall communication plan,” he concluded.

Melvin G. Calimag is a freelance IT writer based in the Philippines.

Avoid getting buried in technical debt

In an experience report on the benefits of object-orientation for OOPSLA ’92, Ward Cunningham observed:

Another, more serious pitfall is the failure to consolidate. Although immature code may work fine and be completely acceptable to the customer, excess quantities will make a program unmasterable, leading to extreme specialization of programmers and finally an inflexible product. Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite. Objects make the cost of this transaction tolerable. The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.

Cunningham’s debt metaphor has since become popularized as “technical debt” or “design debt”. The analogy applies well on several levels.

Debt service. Steve McConnell points out that where there’s debt, there’s interest. Not only do you eventually have to implement a correct replacement (paying back the principal), but in the meantime, you must work around what you implemented incorrectly (that’s the interest). Doing it wrong takes up more time in the long run, although it might be faster in the short term.

Deficit coding. Some organizations treat technical debt similarly to how some governments and individuals treat fiscal debt: they ignore it and just keep borrowing and spending. In terms of technical debt, that means ignoring the mistakes of the past and continuing to patch new solutions over old problems. Eventually, though, these patches take longer and longer to implement successfully. Sometimes, “success” gets redefined in terms of the minimum required to get by. More significantly, the entire system becomes more brittle. Nobody fully comprehends all of its dependencies, and even those who come close can’t make major changes without breaking things. Users begin to wonder why problems take so long to get fixed (if they ever do) and why new problems arise with every release.

Write-offs. You always have the alternative of declaring technical bankruptcy. Throw out the project and start over. As in the financial world, though, the consequences of that decision aren’t trivial. You can lose a lot of credit with users and supporters during the interim when you don’t have a product. Furthermore, a redesign from the ground up is a lot more work than most people realize, and you have to make sure it’s done right. The worst possible scenario would be to spend millions of dollars, years of effort, and end up with only a newer, shinier pile of technical debt. The very fact that you’re considering that kind of drastic measure indicates strongly against your success: the bad habits that got you here have probably left thousands of critical system requirements completely undocumented. Good luck discovering those before you ship something to customers.

It’s not all bad. Strategic debt can leverage finances, and the same holds true in the technical world. Sometimes you need to get to market more quickly than you can do it the right way. So, you make a strategic decision to hack part of the system together, with a plan to go back later and redesign that portion. The key here is that you know that you’re incurring a debt, and it’s all part of a plan that won’t allow that debt to get out of control. It’s intentional, not accidental.

That’s the main benefit of using the technical debt metaphor: awareness. Too often, after a particularly bloody operation on a piece of unmaintainable code, a developer will approach his or her manager with “We really need to rewrite this module”, only to be brushed off with “Why? It’s working now, isn’t it?” Even if the developer possesses the debating skills necessary to point out that all subsequent changes to this code would benefit from taking some time now to refactor it, the manager would rather take chances on the future, because “we’ve got enough on our plate already”.

By framing the problem in terms of the debt metaphor, its unsustainability becomes clear. Most professionals can look at a balance sheet with growing liabilities and tell you that “somethin’s gotta change”. It isn’t always so apparent when you’re digging a similar hole technically.

Chip Camden has been programming since 1978, and he’s still not done. An independent consultant since 1991, Chip specializes in software development tools, languages, and migration to new technology.

Disable UAC for Windows Servers through Group Policy

User Account Control (UAC) is a mechanism in Windows Server 2008, Windows Server 2008 R2, Windows 7, and Windows Vista that provides interactive notification of administrative tasks that may be called by various programs. Microsoft and non-Microsoft applications that are installed on a server will be subject to UAC. The most visible indicator that UAC is in use for a file is the shield ribbon identifier that is put on a shortcut (Figure A).

Figure A

Windows Server 2008 and Windows 7’s UAC features are good, but I don’t feel they are necessary on server platforms for a general-purpose system. The solution is to implement three values in a Group Policy Object (GPO) that will configure the computer account to not run UAC. These values are located in Computer Configuration | Policies | Windows Settings | Security Settings | Local Policies | Security Options with the following values:

  • User Account Control: Behavior of the elevation prompt for administrators in Admin Approval Mode
  • User Account Control: Detect application installations and prompt for elevation
  • User Account Control: Turn on Admin Approval Mode

These values are set to Elevate Without Prompting, Disabled, and Enabled respectively to turn off UAC for computer accounts. This GPO is shown in Figure B with the values set to the configuration elements.
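
For reference, the same three settings live as registry values under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System, so a rough command-line equivalent for testing on a single lab server (verify the value names against your own documentation before scripting this widely) would be:

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v ConsentPromptBehaviorAdmin /t REG_DWORD /d 0 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableInstallerDetection /t REG_DWORD /d 0 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLUA /t REG_DWORD /d 1 /f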

Figure B

In the example, the GPO is named Filter-GPO-ServerOS to apply a filter by security group of computer accounts. (Read my TechRepublic tip on how to configure a GPO to be applied only to members of a security group.) A good practice would be to apply the GPOs to a security group that contains server computer accounts, and possibly one for select workstation accounts. This value requires a reboot to take effect via Group Policy. Also, the UAC shield icon doesn’t go away, but subsequent access to the application doesn’t prompt for UAC anymore.

I know some server admins are fans of UAC, while others prefer to disable the feature. Do you disable UAC? Share your perspective on this feature.

Tips and tricks to help you do more with OpenSSH

Previously, we looked at the basics of key management in OpenSSH, which in my opinion, really need to be understood before you start to play with all the other fine trickery OpenSSH offers. Key management is important, and easy, and now that we all understand how to manage keys, we can get on with the fun stuff.

Because I take OpenSSH for granted, I don’t really think about what I do with it. So here are some pointers and tips to various SSH-related commands that can make life easier, more secure, and hopefully better. This really is just the tip of the iceberg; there is so much more that OpenSSH can do, but I hope this at least gives you some new tricks and inspires some further investigation.

Running remote X applications
If you want to run a remote X11 program locally, you can do that via OpenSSH, taking advantage of its encryption benefits. With X running, open a terminal and type:

$ ssh -fX user@host firefox

This will fire up Firefox on the remote computer and display the output over an encrypted SSH connection on the local display. You will need X11Forwarding yes enabled on the remote server (usually it is; if not, check /etc/ssh/sshd_config or /etc/sshd_config).

Easy connections to remote using screen
When you log into a system and run screen, you have multiple terminals available that you can switch between. If you need to disconnect from the system, suffer a network outage, or switch from one wireless network to another, running the remote session under screen will prevent whatever processes are currently running from terminating prematurely. Typically, though, you would log in directly and then start, or resume, screen.

Instead, you can do this with one command, which has the advantage of logging you out immediately when disconnecting from the screen:

$ ssh -t user@host screen -r

This also has the benefit of not starting an extra shell process just to launch screen. This will not work, however, if screen is not already running on the remote host.

Also note that you can run almost any command remotely like this. The -t option forces pseudo-tty allocation, so you can use it to run simple commands, interactive commands like a MySQL client login, or alternatives to screen such as tmux.
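
For instance (user, host, and database names here are only placeholders), either of these works the same way:

$ ssh -t user@host tmux attach
$ ssh -t user@host mysql -u dbuser -p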

Encrypted tunnels to remote hosts
This is one of the best uses of OpenSSH. With tunneling, you can tell OpenSSH to create a tunnel to a port on the remote server, and connect to it locally. For instance, if you run a private web server, where port 80 is not available to the internet (via a firewalled port), you can use the following to connect to it:

$ ssh -N -L8080:127.0.0.1:80 user@remotehost

Then point your browser to http://127.0.0.1:8080 and it will connect to port 80 on remotehost, through the SSH tunnel. Keep in mind that, for web connections at least, the tunnel connects by IP address, so reaching a name-based virtual host this way won’t work directly.
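
One workaround, if you know the name of the virtual host you are after (internal.example.com below is a made-up name), is to supply the Host header yourself:

$ curl -H "Host: internal.example.com" http://127.0.0.1:8080/

Adding a matching entry for 127.0.0.1 to /etc/hosts usually accomplishes the same thing for a browser.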

On the other hand, if you have a MySQL service or some other firewalled service, you can use the same technique to get to that service as well. If you wanted to connect to MySQL on remotehost you might use:

$ ssh -N -L3306:127.0.0.1:3306 remotehost

Then point your MySQL client application to the localhost (127.0.0.1) and port 3306. The general syntax of the -L option is “local_port:destination_host:destination_port”, where the destination host is resolved from the remote server’s point of view (so 127.0.0.1 in the examples above refers to remotehost itself).

Creating a SOCKS5 proxy
One really neat thing OpenSSH can do is create a SOCKS5 proxy, which is a direct connection proxy. This allows you to tunnel all HTTP requests, or any other kind of traffic that can be sent through a SOCKS5 proxy, via SSH through a server you can access. This might be useful at a coffee shop, for instance, where you want to direct all HTTP traffic through your SSH proxy to your system at home or the office, in order to avoid potential snooping or data theft (looking directly at you, FireSheep).

The command I use to create the SOCKS5 proxy using OpenSSH is:

$ ssh -C2qTnNM -D 8080 user@remotehost

This creates a compressed connection (-C), disables pseudo-tty allocation (-T), and places the ssh client into master mode for connection sharing (-M); see man ssh for details on the other options. The proxy will live on port 8080 of the local host. A quick test is to use something like curl with whatismyip.com:

$ curl --socks5 127.0.0.1:8080 www.whatismyip.com/automation/n09230945.asp

Call curl with that command, then compare it to using curl on that URL directly, and you should see two different IP addresses: the first being the remote server’s IP, and the second being your own.

Since curl is really only useful for testing, check out FoxyProxy for Firefox in order to make Firefox use the proxy.
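
If you start this proxy often, the same setup can live in ~/.ssh/config so a short alias brings it up; a sketch (the host alias and names below are invented):

Host officeproxy
    HostName remotehost.example.com
    User user
    Compression yes
    DynamicForward 8080

After that, ssh -N officeproxy is enough to bring the proxy up.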

These are just a few things that OpenSSH can do, but I think they’re very useful. OpenSSH truly is a ubiquitous Swiss-Army knife utility; it is pre-installed and available on pretty much every major operating system with the exception of Windows. It may be intimidating if you’re just figuring it out for the first time, but spend some time playing with it and that investment will definitely pay off.

10 things to look for in a data center

Everyone’s going to the cloud. The cloud’s all the rage. Almost no IT discussion is complete without mentioning “the cloud”. But when it comes down to it, the cloud is nothing more than systems hosting information in a data center somewhere “out there”.

Organizations have discovered the benefits of offloading infrastructure development, automatic failover engineering, and multiple coordinated power feeds, not to mention backups, OS maintenance, and physical security, to third-party data centers. That’s why “going to the cloud” ultimately makes sense.

Unfortunately, not every data center is ready for prime time. Some have sprung up as part of a cloud-based land grab. Review these 10 factors to ensure that your organization’s data center is up to the task.

1: Data capacity
Data centers are typically engineered to support mind-boggling data transmission capacities. Some feature multiple OCx and SONET connections that can manage Amazon.com-like Web site demands. Other less sophisticated entities might try getting by using redundant T-3s. Don’t find out the hard way that your data center provider failed to adequately forecast capacity and can’t quickly scale.

2: Redundant power
Many data centers have online electrical backups. UPSes, in other words. If your organization maintains business-critical systems that simply can’t go down, be sure that the data center has a second electrical backbone connection. Only N+1 power grid connectivity, to a secondary electrical source, can help protect against catastrophe.

3: Backup Internet
Just as any quality data center will maintain redundant power sources, so too must it maintain secondary and tertiary Internet connectivity. Buried cables get cut. Overhead cables fall when trucks strike poles. Vendors experience network-wide outages. Only by making sure that multiple tier-1 Internet provider circuits feed a facility via fully meshed backbones can IT managers rest assured they’ve done what they can to eliminate potential downtime.

4: Automatic hardware failover
Redundant power, Internet, and even heating and cooling systems are great, but if they’re not configured as hot online spares, downtime can still occur. It’s critical that data centers employ redundant online switches, routers, UPSes, and HVAC equipment that automatically fail over when trouble arises.

5: Access control
The importance of physical security can’t be overstated. Commerce could be significantly affected if just one unstable individual were able to drive a large vehicle into a busy and sensitive data center. That’s why it’s important that a data center’s physical perimeter be properly protected. In addition to physical access controls (keys, scanner cards, biometric devices, etc.), care must be taken to ensure that, should someone gain access to a data center, individually leased sections remain secure (thanks to additional physical access controls, locks, cages, rooms, etc.).

6: 24x7x365 support
Data centers must be staffed and monitored by properly trained technicians and engineers at all times. It’s an unfortunate byproduct of today’s pressurized business environment but a fact nevertheless. Systems can’t fail. Constant monitoring and maintenance is a must. Certainly, many data centers will run leaner shifts during off hours, but telephone support and onsite assistance must be always available. Further, data center services must include customer reporting tools that assist clients in understanding a center’s network status.

7: Independent power
Data centers must have redundant electrical grid connections. That’s a given. And facilities must also maintain their own independent power supply. Most turn to onsite diesel generators, which need to be periodically tested to ensure that they can fulfill a data center’s electrical requirements in case of a natural disaster or episode that disrupts the site’s other electrical sources.

8: In-house break/fix service
One of the benefits of delegating services to the cloud is eliminating the need to maintain physical and virtualized servers. OS maintenance, security patching, and hardware support all become the responsibility of the data center. Even if an organization chooses to co-locate its own servers within a data center, the data center should provide in-house staff capable of maintaining software and responding to hardware crises.

9: Written SLAs
Any data center contract should come complete with a specifically worded service level agreement (SLA). The SLA should guarantee specific uptime, service response, bandwidth, and physical access protections, among other elements. Ensure, too, that the SLA or terms of service state what happens if a data center fails to provide uptime as stated, maintenance or service as scheduled, or crisis response within stated timeframes.

10: Financial stability
All the promises in the world, and even an incredibly compelling price, mean nothing if the data center fails. Before moving large amounts of data and equipment into a facility, do some homework on the company that owns the site. Confirm that it’s free and clear of lawsuits, has adequate operating capital, and isn’t in financial straits. The last thing you want to do is have to repeat the process because a center fails financially or must cut costs (and subsequently service and capacity) to stay afloat.

Erik Eckel owns and operates two technology companies. In addition to serving as a managing partner at Louisville Geek, which specializes in providing cost-effective technology solutions to small and midsize businesses, he also operates Eckel Media Corp.

Microsoft Access


Use a temporary default value to streamline data entry in Access

There are a lot of opportunities for reducing data entry, but here’s one you might not have considered–entering temporary default values.

Doing so will reduce keystrokes when records share the same value such as the same zip code, the same city, the same customer, and so on, but that shared value changes from time to time.

This situation probably arises more than you realize. For instance, suppose a data entry operator enters orders processed by sales personnel who support specific ZIP codes, cities, or regions. The data entry operator knows that each order in a specific pile will have the same ZIP code, city, and so on.

Or, perhaps your data entry operator receives piles of work order forms from service managers who service only one company. In that case, every form in the pile will share the same customer value.

When a data entry operator enters several records with the same value one after the other, you can ease the data entry burden just a bit, by making that related value the default value for that field–temporarily. That way, the operator doesn’t have to re-enter the value for each new record–it’s already there!

Setting up this solution is easier than you might think–it takes just a bit of code in the control’s AfterUpdate event. Using the example form below, we’ll use this technique to create temporary defaults for three controls named txtCustomer, txtCreatedBy, and dteSubmittedDate. At the table level, the SubmittedDate field’s default value is Date(). (You can work with most any form, just be sure to update the control names accordingly.)

To add the event procedures for the three controls, open the form in Design View and then click the Code button in the Tools group to open the form’s module. Enter the following code:

Private Sub dteSubmittedDate_AfterUpdate()
  'Wrap the current date in # delimiters (Chr(35)) and make it the default value.
  dteSubmittedDate.DefaultValue = Chr(35) & dteSubmittedDate.Value & Chr(35)
  Debug.Print dteSubmittedDate.DefaultValue
End Sub
Private Sub txtCreatedBy_AfterUpdate()
  'Wrap the current value in quotation marks (Chr(34)) and make it the default value.
  txtCreatedBy.DefaultValue = Chr(34) & txtCreatedBy.Value & Chr(34)
End Sub
Private Sub txtCustomer_AfterUpdate()
  'Wrap the current customer value in quotation marks (Chr(34)) and make it the default value.
  txtCustomer.DefaultValue = Chr(34) & txtCustomer.Value & Chr(34)
End Sub

When you open the form in Form view, the AutoNumber field will display (New) and the Submitted Date control will display the current date. Enter a new record. Doing so will trigger the AfterUpdate events, which will use the values you enter as the default values for the corresponding controls:

  • ABC, International is now the default value for txtCustomer.
  • Susan Harkins is now the default value for the txtCreatedBy.
  • 1/20/2011 is now the default value for the dteSubmittedDate. The default value was generated by Date().

When you click the New Record button, the newly-set default values automatically fill in the controls. The only value the data entry operator has to enter is the service code.

That means that the data entry operator can bypass three controls for each new record until a value changes. For instance, when the data entry operator moves on to the stack of order forms for RabbitTracks, he or she will update the Customer, CreatedBy, and SubmittedDate values for the first record in that batch. Doing so will reset the temporary default values. That’s why this is such a useful technique for batch input: as the operator works through the pile of forms, the default values update to match the new input values, automatically.

It’s important to remember that this code updates the control’s default value property at the form level. This form-level setting takes precedence over a table-level equivalent. However, it does not overwrite the table property. If you delete the form-level setting, the table-level property kicks right in.

When you close the form, it saves the temporary default value. Consequently, when you next open the form, it will use the last set of default values. If you want the form to clear these properties from session to session, add the following code to the form’s module:

Private Sub Form_Open(Cancel As Integer)
  'Set Default Value properties to nothing.
  dteSubmittedDate.DefaultValue = vbNullString
  txtCreatedBy.DefaultValue = vbNullString
  txtCustomer.DefaultValue = vbNullString
End Sub

When you open the form, the Open event will clear the three previously-set default values. That means that txtCustomer and txtCreatedBy will be blank and dteSubmittedDate will display the current date (the result of Date(), the field’s table-level Default Value setting).

This technique might not seem like much to you. But, some users spend a lot of time entering data, so anything you can do to eliminate even a few steps will be a welcome enhancement.

Microsoft Word


Add line numbers to a Word document

It isn’t often that we need to number lines in a Word document, but the need does arise occasionally. For instance, developers and programmers often display line numbers with code.

Of course, you’re not writing code in a Word document, but you might insert code into the middle of a technical document. If you do, you might just want to include line numbers for that code. Regardless of why you want line numbers, the surprising fact is that Word will comply and without much effort on your part.

The easy part is enabling the feature, as follows:

Word 2003:

  1. From the File menu, choose Page Setup.
  2. Click the Layout tab.
  3. Click Line Numbering (at the bottom).
  4. Check the Add Line Numbering option.
  5. Check the appropriate options.

Word 2007/2010:

  1. Click the Page Layout tab.
  2. In the Page Setup group, click Line Numbers.
  3. Choose the appropriate option, such as Continuous.

There’s a little bit of version confusion, but it’s a small obstacle. By default, Word 2003 begins numbering at the beginning of the document and restarts numbering with each new page; of course, you can change those settings. In Word 2007 and 2010, enabling the feature includes specifying how to number each new page or section. It’s not so different; it only seems that way at first.

Once you add line numbers, you’re not stuck with them, strictly speaking. To suppress line numbering (in Word 2003) for a section of text, select the text, right-click it, and choose Paragraph from the resulting context menu. Then click the Line and Page Breaks tab, check the Suppress Line Numbers option, and click OK.

Similarly, Word 2007 and 2010 let you suppress specified areas. Simply select the paragraph(s) in question and choose Suppress For Current Paragraph from the Line Numbers option in the Page Setup group.

By default, the feature begins with 1 and increments by one. You can start with a different number and you can change the increment value. To do so, change the options via the Line Numbers dialog, as follows:

Word 2003:

  1. From the File menu, choose Page Setup.
  2. Click the Layout tab.
  3. Click Line Numbering (at the bottom).

Word 2007/2010:

  1. Click the Page Layout tab.
  2. In the Page Setup group, click Line Numbers.
  3. Choose Line Numbering Options.

To change the first number, edit the Start At value. The From Text option lets you determine the space between the number and the text. By changing the Count By value, you can change the increment value. The Numbering options are self-explanatory–you can restart numbering at the beginning of each new page or each new section.

You might never need this feature, but if the need arises, you’ll be able to say Yes! I can do that for you!

Microsoft Excel


Custom sorting in Excel

Sorting is a common task, but not all data conforms to the familiar ascending and descending rules. For example, months don’t sort in a meaningful way when sorted alphabetically. In this case, Excel offers a custom sort.

Before we look at a custom sort for months, let’s review the problem month names present for normal sorting: when you apply an ascending sort, the list sorts alphabetically instead of by month order.

If you want an alphabetic sort, this works great, but most of the time that won’t be the result you want. You could use an expression that returns each month’s position in the year and sort by its results, but that’s unnecessary because there’s a built-in sort just for months. To apply this custom sort, do the following (in Excel 2003):

  1. Select the month names. In this case, that’s A2:A13.
  2. Choose Sort from the Data menu.
  3. The resulting dialog box anticipates the custom sort. The Sort By control displays Month with an Ascending sort. If you click OK,  Excel will sort the selected months in alphabetic order.
  4. Click the Options button at the bottom of the dialog box.
  5. In the resulting dialog box, the First Key Sort Order control displays Month. Click the dropdown arrow to display four custom sort options.
  6. Choose the last option, January, February, March, and so on. By default, a custom sort isn’t case-sensitive, but there’s an option to make it so, if you need it.
  7. Click OK twice and Excel sorts the months in the familiar way you expect.

Excel 2007 and 2010 offer the same flexible custom sort, but getting there’s a bit different:

  1. Click the Sort option in the Sort & Filter group. (Don’t click the A to Z or Z to A sort icons, the ones with the arrows.)
  2. In the resulting Sort dialog box, click the Order control’s dropdown list and choose the appropriate custom sort.
  3. Click OK.

When using a custom sort, the list doesn’t have to contain all of the sort elements to work. A list of just a few months will still sort by month order when applying the custom sort.

Why a lively imagination may bolster security more than best practices

Knowing how to protect yourself and your privacy depends on understanding the dangers and figuring out solutions to the problems that create those threats. Knowing how to protect yourself against a virus depends on knowing why a virus is dangerous in the first place, and having at least some vague understanding of how viruses work.

And knowing how to protect yourself from the ill effects of someone using your personally identifying information to commit identity fraud depends on knowing what information people want from you for that purpose, and how they get it.

Some people rely on others to protect them, hoping those others:

  1. know what they do not know, and do not want to know, about protecting themselves–without putting in the time to learn enough about the subject to be able to actually determine whether those supposed protectors are exaggerating their skills, or
  2. care enough about their security to actually do a diligent job of protecting it–more, in fact, than they themselves care, since they are not willing to do the work for themselves–rather than only caring about whether they can be sued for failures.

As should be obvious once you understand the requirements for success of the strategy of leaving your security up to others, real security is your responsibility. This does not mean it is your fault if you are the victim of some depraved malicious security cracker’s scam, but it does mean that you must take necessary steps to protect yourself, because ultimately nobody else is likely to do as much for you as you yourself can do.

Of course, the truth is that you really cannot know how you might be subjected to misappropriation of your personally identifying information and how to protect yourself against it, to take but one example of a potential security threat.

Such knowledge is not something that can be written down and disseminated to the world, because it is not a static body of knowledge. It is dynamic and ever-changing; the field of battle on which the innovations of an arms race are constantly tested, and regularly surpassed by new innovations.

Back in the early ’80s, DES (Data Encryption Standard) was widely regarded as uncrackable, and was considered “the answer” for protecting data against unauthorized access, but by today’s cryptographic standards it is laughably vulnerable.

Understanding security is not a matter of studying and memorizing a lot of facts. It requires not knowledge so much as a way of thinking that helps you consider the way a security system can be subverted, broken, or circumvented–and, based on that, the ways it can be improved, or that its deficiencies can be mitigated by careful use or the application of additional tools that patch the holes in the shield.

As demonstrated by the events described in Quantum Hacking cracks quantum crypto, the current biggest weakness in new quantum key exchange systems is not the methods of ensuring keys have not been harvested off the wire; it is the hardware deployment used to make the quantum key exchange work in the first place.

As explained in 10 (+1) reasons to treat network security like home security, the security provided by a lock is limited by the strength of the door the lock secures and the doorframe in which the door is mounted–and a combination of strong locks, doors, and doorframes is only as secure as the window a couple feet to the left.

Obsessive focus on the intended uses of a security feature leaves you open to the unexpected. Flexibility and imagination are often more important for ensuring security against malicious security crackers and other “enemies” than slavish devotion to “best practices”.

Yes, you may have antivirus and firewall software installed on your laptop, but that will not do you much good if someone steals it from the trunk of your car. Maybe encryption can protect your data even if someone steals the laptop, but if you do not keep backups on a separate computer, that will not help you finish your Master’s thesis, as the unfortunate soul whose laptop was stolen found out.

When was the last time you considered the possibility that your computer may already be infected? Do you ever think, “Oh, it’ll be no problem to leave my desk without locking the screensaver just this once!”?

Do you want to be the guy who designed an RFID system for passports to help protect your country from terrorists, and did not stop to consider whether a radio receiver wired to a bomb could detect your country’s passport signals and detonate the device when someone carrying one walks by?

Thinking “outside the box”, taking an imaginative and flexible approach to thinking about how processes and devices can be (ab)used for purposes for which they were not designed, can actually provide you with interesting ways to protect yourself as well as alert you to ways your security might be compromised.

Consider, for instance, the fact that guns can keep computers in your luggage safe. The fact that firearms in your luggage are treated differently from computers, in terms of how you are allowed–or required–to transport them, can actually be leveraged to ensure greater safety for your computers. It also happens to point out an important fact about TSA security requirements: you are not allowed to effectively secure your luggage against theft or vandalism except in specific, uncommon circumstances.

The upshot of all of this is that Albert Einstein was right when he said “Imagination is more important than knowledge.” Security is not really about what you know; it is about how you think.

Syncing time in Linux and Windows with NTP

There are plenty of reasons you should have your Linux and Windows servers set with the correct time. One of the most obvious (and annoying) is, without the correct time, your Linux machine will be unable to  connect to a Windows Domain.

You can also get into trouble with the configuration of your mail and web servers when the time is not correct (sending e-mail from the future is never a good idea). So how do you avoid this? Do you have to constantly be resetting the time on your machines? No.

Instead of using a manual configuration, you should set up all of your servers to use NTP (Network Time Protocol) so that they always have the correct time.

Windows Server settings
There is a very simple way to set your Windows Server OS (2000 and later) to use an external time server. Simply click the Fix it link and the necessary registry entries will be changed for you; your server will then start updating its time from an external source.

If you are more of the DIY Windows admin, you will want to know the registry edits that are made by clicking that Fix It link. Here they are (NOTE: The Windows registry is a tool that not all users are qualified to use. Make sure you do a backup of your registry before you make any changes.):

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters\Type

Right-click Type and select Modify. Change the entry in the Value Data box to NTP and click OK.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config\AnnounceFlags

Right-click AnnounceFlags and select Modify. In the Edit DWORD Value dialog, change the Value Data to 5 and click OK.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpServer

In the right pane, right-click Enabled and select Modify. In the Edit DWORD Value dialog, change the Value Data to 1 and click OK.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters

In the right pane, right-click NtpServer and select Modify. Change the Value Data to the NTP server you want to poll (or a space-delimited list of servers, such as pool.ntp.org) and click OK.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient\SpecialPollInterval

In the right pane, right-click SpecialPollInterval and select Modify. Change the Value Data to Seconds (where Seconds is the number of seconds between polls; 900 seconds is ideal) and click OK.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config\MaxPosPhaseCorrection

In the right pane, right-click MaxPosPhaseCorrection and select Modify. Change the Value Data to Seconds (where Seconds is the largest positive time correction, in seconds, that the service is allowed to apply in one step) and click OK.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config\MaxNegPhaseCorrection

In the right pane, right-click MaxNegPhaseCorrection and select Modify. Change the Value Data to Seconds (where Seconds is the largest negative time correction, in seconds, that the service is allowed to apply in one step) and click OK.
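If you would rather script these edits than click through the registry editor, the same changes can be made with reg.exe from an administrative command prompt. This is only a sketch: the NtpServer value (here 0.pool.ntp.org,0x1) and the interval and correction values are assumptions you should replace with the peers and limits appropriate for your environment.

reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters" /v Type /t REG_SZ /d NTP /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config" /v AnnounceFlags /t REG_DWORD /d 5 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpServer" /v Enabled /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters" /v NtpServer /t REG_SZ /d "0.pool.ntp.org,0x1" /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient" /v SpecialPollInterval /t REG_DWORD /d 900 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config" /v MaxPosPhaseCorrection /t REG_DWORD /d 3600 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config" /v MaxNegPhaseCorrection /t REG_DWORD /d 3600 /f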

Once you have made the final registry edit, quit the registry editor and then click Start | Run and enter the following command:

net stop w32time && net start w32time

Your Windows machine will now start syncing time to an external server at the set intervals.
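To confirm the change took effect, you can force an immediate sync and check where the service is getting its time. The /query switches shown here exist on Windows Server 2008 and later; on older versions, w32tm /monitor provides similar information.

w32tm /resync
w32tm /query /source
w32tm /query /status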

On to the Linux server
In order to get NTP up and running, you first have to install the NTP daemon on the machine. This is very simple, as ntpd is located in your default repositories. So, with that in mind, open up a terminal window and issue one of the following commands (dependent upon which distribution you are using). NOTE: If you are using a non-sudo distribution, you will need to first su to the root user. Once you have administrative privileges, issue one of the following:

  • sudo apt-get install ntp (for Debian-based systems)
  • yum install ntp (for Red Hat-based systems)
  • urpmi ntp (for Mandriva-based systems)
  • zypper install ntp (for SUSE-based systems)

Upon installation, your NTP system should be pre-configured correctly to use an NTP server for time. But if you want to change the server you use, you would need to edit your /etc/ntp.conf file. In this file you want to add (or edit) a line to reflect your NTP needs. An entry looks like:

server SERVER_ADDRESS [OPTIONS]

Where SERVER_ADDRESS is the address of the server you want to use and [OPTIONS] are the available options. Of the available options, there are two that might be of interest to you:
  • iburst: Sends a burst of eight packets, instead of the default single packet, when the configured server is unreachable; this speeds up initial synchronization.
  • dynamic: Use this option if the NTP server is currently unreachable (but will be reachable at some point).

By default, the /etc/ntp.conf file will look similar to this:

server 0.debian.pool.ntp.org iburst dynamic

server 1.debian.pool.ntp.org iburst dynamic

server 2.debian.pool.ntp.org iburst dynamic

server 3.debian.pool.ntp.org iburst dynamic

More than one server is used in order to ensure a reliable connection. Should one server not be available, another will pick up the duty.

When you have everything set up correctly, enter the following command:

sudo /etc/init.d/ntp start (on Debian-based machines) or /etc/rc.d/init.d/ntpd start (on most other machines; note that the init script is typically named ntpd rather than ntp on Red Hat-based systems, and you will need to first su to the root user for this command to work).

Your machine should now start syncing its time with the NTP server configured.
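To verify that the daemon is actually reaching its configured servers, query the peer list with the ntpq utility, which is installed along with the ntp package. An asterisk in the first column marks the peer your machine is currently synchronizing with.

ntpq -p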

Final thoughts
It may seem like a task that should be unnecessary, but in certain systems and configurations, the precise time is crucial. Whether you are serving up web pages, delivering mail, or trying to connect to a Windows domain, keeping the correct time will make just about every task either easier or simply correct.

My first IronRuby application

In my continuing exploration of IronRuby, I was in search of a good opportunity to try writing a Ruby application from scratch that was interesting but not trivial. Fortunately, TechRepublic writer Chad Perrin posted a blog that fit the bill.

To summarize his post, he was looking for a good way to find out for a roll of multiple dice of the same size, how many combinations add up to each possible sum. For example, if you have five six-sided dice, what is the number of permutations that add up to the numbers five through 30? I decided that working on his problem would be the perfect opportunity for me to really explore IronRuby.

Yes, there are some mathematical approaches to this problem that eliminate the need for super fancy programming. But I’m not a math whiz, and implementing a three-line formula wasn’t going to help me learn Ruby or use the IronRuby environment. So I set about solving the problem myself, from scratch.

My first attempt at solving this was a bit too clever by half: I tried to construct a string of code using loops and then run it through eval(). This is the kind of thing I used to do in Perl all the time.

While this approach has merit, it felt like it was too much of a workaround. The final nail in the coffin for me was that I don’t know Ruby well enough to be able to write code that writes code and have the written code actually work. Debugging eval()’ed stuff can be a nightmare, in my experience. After about 30 minutes of frustration, I took a step back.

After writing to Chad about the problems I was having, I realized that I would have been better served by using a recursive function to write my code to eval(). The major challenge with this problem is that, while it can be solved with nested loops, the number of levels of nesting is unknown at the time of writing; this is what I was hoping to mitigate with my eval() approach.

As I sat down to write the recursive version of the code generator, a lightbulb went off in my head: “if I’m writing a recursive function, why not just solve it recursively?” So I did, and less than 30 minutes later (remember, I never wrote Ruby from scratch before), I had a working application.

Now, the code isn’t perfect. At the time of this writing, it isn’t creating nice output, and it isn’t calculating the percentages. These issues are easily solved. But for my first try at this problem, I am proud of the output. See the code sample below.

def calculate(iteration, low, high, currentsum, output)
  if iteration == 1
    # last die: tally the final sum for each possible face value
    low.upto(high) do |value|
      newsum = currentsum + value
      output[newsum] += 1
    end
  else
    # more dice to roll: recurse with the running sum
    low.upto(high) do |value|
      calculate(iteration - 1, low, high, currentsum + value, output)
    end
  end
  return output
end

diceInput = ARGV[0].to_i
lowInput = ARGV[1].to_i
highInput = ARGV[2].to_i

if diceInput < 1
  puts "You must use at least one die."
  exit # a bare 'return' is not valid at the top level of a script
end

# pre-populate the hash with every reachable sum so counts start at zero
initResults = Hash.new
(lowInput * diceInput).upto(highInput * diceInput) do |value|
  initResults[value] = 0
end

results = calculate(diceInput, lowInput, highInput, 0, initResults)
results.each do |result|
  puts "#{result}"
end

puts "Press Enter to quit..."
gets

My thoughts about IronRuby

While working on this solution, I got more familiar with IronRuby. To be frank, it needs some work in terms of its integration with the Visual Studio IDE. As a Ruby interpreter, it seems fine (I know it doesn’t get 100 percent on the Ruby compatibility tests), but the integration isn’t what I need.

For one thing, the “Quick Watches” do not work at all from what I can tell. Setting watches does not seem to work either. You can do value inspection via the “Locals” window, though. But it’s really unpleasant to see the options you really want but not to be able to use them.

The lack of IntelliSense isn’t a deal breaker, but it sure would be nice. No F1 help is pretty harsh, especially for someone like me who is not familiar with Ruby at all. It felt very old-school to be thumbing through my copy of The Ruby Programming Language while working!

I also found it rather interesting how Ruby handles variable typing. I’m so used to Perl, where a variable’s type is essentially determined by usage on a per-expression basis.

For example, you can assign a string literal that is composed only of numbers to a variable, and then perform mathematical operations on it. In Ruby, this doesn’t happen. Instead, if I assign a string literal to a variable, it functions as a string until I assign something of a different type to that variable. While this is perfectly sensible, it went against the grain of my way of thinking. Once I got a handle on this, my work in Ruby went a lot more smoothly.

Summary
I’m certainly no Ruby expert, but at this stage in the game, I feel like it is a language that I want to continue using in my career. Ruby has a lot to offer in terms of expressiveness. Soon, I will explore its use in Windows Phone 7 applications and take a look at how it interoperates with the .NET Framework.

Microsoft Word


Where’s the Number of Pages option in Word 2007 and 2010?

Inserting the page number in earlier versions of Word is simple. You open the header or footer and click the appropriate options on the Header and Footer toolbar. The Page Numbers option is also available from the Insert menu.

It’s still easy in Word 2007 and 2010, but finding the options might complicate the task just a bit.

To insert a page number at any time in a Word 2007 or 2010 document, click the Insert tab and then click Page Number in the Header & Footer group. Although this option is in the Header & Footer group, you can select Current Position to insert a literal page number almost anywhere in a document; you’re not limited to the header and footer sections. That’s the easy part.

You can also click the Insert tab and click Header or Footer from the Header & Footer group. The resulting gallery will offer a number of pre-defined page numbering options, although none of them offer the Page x of y format.

If you want to insert page numbers via fields, use Quick Parts. You’ll find this option in the Text group, also on the Insert tab. From the Quick Parts dropdown list, choose Field.

After selecting Field, specify one of the page numbering fields, Page or NumPages. You can combine these to create the form Page x of y. This process is probably familiar to you, but finding it via Quick Parts is new to Word 2007 and 2010. It’s probably an appropriate spot, but it might not be the first place you look.
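For a quick sketch of the result, a Page x of y footer built from those fields is just the two field codes with literal text around them. Press Ctrl+F9 to insert each pair of field braces (typing the braces won’t work), then press F9 to update the fields:

Page { PAGE } of { NUMPAGES }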

Displaying page numbers in Word 2007 and 2010 is still easy. The Number of Pages option isn’t hidden, but you might have trouble finding it.

Microsoft Excel


Restrict duplicate data using Excel Validation

Excel sheets accept duplicate values of course, but that doesn’t mean you’ll always want to allow them. There are times when you’ll not want to repeat a value. Instead of entering a new row (or record), you’ll want the user to update existing data. You can train users, but that doesn’t mean they’ll comply. They’ll try to, but specific rules are easy to forget, especially if updates are infrequent. The easiest way to protect a sheet from duplicate values is to apply a validation rule. If a user tries to enter a duplicate value, the appropriate validation rule will reject the input value and (usually) provide helpful information as to what the user should do next.

For example, let’s suppose users track hours worked using the sheet shown below. You want each worked date (column A) entered just once–there’s no signing in and out for lunch or other activities. (This setup would be too restrictive for most situations, but it sets up the technique nicely.) Realistically, a user could easily see that they’re re-entering an existing date, but in a sheet with a lot of data, that wouldn’t be the case. At any rate, there’s nothing to stop the user from entering the same date twice.

To apply a validation rule that restricts input values to only unique values, do the following:

  1. Select A2:A8 (the cells you’re applying the rule to).
  2. Choose Validation from the Data menu and click the Settings tab. In Excel 2007/2010, click the Data tab and choose Data Validation from the Data Validation dropdown in the Data Tools group.
  3. Choose Custom from the Allow dropdown list.
  4. The Custom option requires a formula that returns True or False. In the Formula field, enter the following expression: =COUNTIF($A$2:$A$8,A2)=1.
  5. Click the Error Alert tab and enter an error message.
  6. Click OK.

Once you set the rule in place, users must enter unique date values in A2:A8. As you can see below, Excel rejects a duplicate date value, displays a simple explanation, and tells the user what to do next–click Retry and enter a unique date.

This particular validation formula accepts any value; it just won’t accept a duplicate. The cell format is set to Date, which restricts entry to date values. You can use this formula to restrict any type of data, not just date values.
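If you want the rule to cover an entire column rather than just A2:A8, the same approach works with a whole-column reference. This is a sketch you should adapt to your own sheet (select the data cells first, then apply the rule):

=COUNTIF($A:$A,A2)=1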

Microsoft Office


Disable printer notification in the Windows System Tray

When you send something to the printer, Windows displays a small balloon from the System Tray that identifies the document you’re printing, and that includes Office documents. I find this notification annoying.

Fortunately, you can disable this annoying feature, as follows (in Windows XP):

  1. Click the (Windows) Start menu and choose Printers and Faxes.
  2. In the Printers and Faxes window, choose Print Server Properties from the File menu. In Windows 7, Print Server Properties is on the window’s toolbar.
  3. Click the Advanced tab.
  4. Clear the Show Informational Notifications For Local Printers option.
  5. Clear the Show Informational Notifications for Network Printers option (if applicable).
  6. Click OK.
  7. Close the Printers and Faxes window.

Depending on your system’s configuration, you might have to disable both local and network printers.

You’re probably wondering if this feature really annoys me as much as I say or if I’m just employing a clever writing device to make this entry more interesting. Honestly, it’s an annoying interruption I can definitely live without.

Of course, to more patient folk, this interruption seems insignificant. After all, it doesn’t keep you from working and it will disappear on its own, eventually. Most people will ignore it, but it diverts my attention, even after all this time. If it annoys you as much as it annoys me, you’ll appreciate knowing how to disable it!

5 tips for deciding whether to virtualize a server

Even though server virtualization is all the rage these days, some servers simply aren’t good candidates for virtualization. Before you virtualize a server, you need to think about several things.

Here are a few tips that will help you determine whether it makes sense to virtualize a physical server.

1: Take a hardware inventory
If you’re thinking about virtualizing one of your physical servers, I recommend that you begin by performing a hardware inventory of the server. You need to find out up front whether the server has any specialized hardware that can’t be replicated in the virtual world.

Here’s a classic example of this: Many years ago, some software publishers used hardware dongles as copy-protection devices. In most cases, these dongles plugged into parallel ports, which do not even exist on modern servers. If you have a server running a legacy application that depends on such a copy-protection device, you probably won’t be able to virtualize that server.

The same thing goes for servers that are running applications that require USB devices. Most virtualization platforms will not allow virtual machines to utilize USB devices, which would be a big problem for an application that depends on one.

2: Take a software inventory
You should also take a full software inventory of the server before attempting to virtualize it. In a virtualized environment, all the virtual servers run on a host server. This host server has a finite pool of hardware resources that must be shared among all the virtual machines that are running on the server as well as by the host operating system.

That being the case, you need to know what software is present on the server so that you can determine what system resources that software requires. Remember, an application’s minimum system requirements do not change just because the application is suddenly running on virtual hardware. You still have to provide the server with the same hardware resources it would require if it were running on a physical box.

3: Benchmark the system’s performance
If you are reasonably sure that you’re going to be able to virtualize the server in question, you need to benchmark the system’s performance. After it has been virtualized, the users will be expecting the server to perform at least as well as it does now.

The only way you can objectively compare the server’s post-virtualization performance against the performance that was being delivered when the server was running on a dedicated physical box is to use the Performance Monitor to benchmark the system’s performance both before and after the server has been virtualized. It’s also a good idea to avoid over-allocating resources on the host server so that you can allocate more resources to a virtual server if its performance comes up short.

4: Check the support policy
Before you virtualize a server, check the support policy for all the software that is running on the server. Some software publishers do not support running certain applications on virtual hardware.

Microsoft Exchange is one example of this. Microsoft does not support running the Unified Messaging role in Exchange Server 2007 or Exchange Server 2010 on a virtual server. It doesn’t support running Exchange Server 2003 on virtual hardware, either.

I have to admit that I have run Exchange Server 2003 and the Exchange Server 2007 Unified Messaging role on a virtual server in a lab environment, and that seems to work fine. Even so, I would never do this in a production environment because you never want to run a configuration on a production server that puts the server into an unsupported state.

5: Perform a trial virtualization
Finally, I recommend performing a trial virtualization. Make a full backup of the server you’re planning to virtualize and restore the backup to a host server that’s running in an isolated lab environment. That way, you can get a feel for any issues you may encounter when you virtualize the server for real.

Although setting up such a lab environment sounds simple, you may also have to perform a trial virtualization of some of your other servers. For example, you might need a domain controller and a DNS server in your lab environment before you can even test whether the server you’re thinking about virtualizing functions properly in a virtual server environment.

Brien Posey is a seven-time Microsoft MVP. He has written thousands of articles and written or contributed to dozens of books on a variety of IT subjects.

Take the ‘policy’ out of IT

Reading the admonishments of the IT “establishment”, one could be excused for thinking we were becoming politicians or diplomats.

According to the pundits, each new technology and innovation requires a raft of overwrought “policy” documents. Whether it’s social media, cloud computing, or boring old desktop usage, apparently the ultimate expression of IT value is producing a multichapter treatise of do’s and don’ts that will likely be immediately filed in the bin by those who have actual work to do at your company.

The butt of most corporate jokes, our friends in HR, are another business unit historically mired in policy and in too many cases blind to its actual benefits to the company (or lack thereof).

Think of the last time you received a series of e-mail blasts addressed to every employee of your company, heralding the arrival of a new HR policy with the breathless zeal usually reserved for the latest teen celebrity. Was your reaction to drop everything you were doing, click the “refresh” button with bated breath until the newest HR policy appeared on the screen, and read every line with unreserved zeal?

If you are like most normal workers, you are overloaded with work, and if you expend more than eight seconds of consideration on a new HR policy, you are probably 100 percent more diligent than your peers. IT policies are greeted with similar disdain and perhaps even less enthusiasm than HR policies simply because HR is the most visible entity in getting paychecks out the door.

Rather than rushing to sign a raft of consultants to a six-figure engagement to develop the perfect IT policy, consider the following.

Treat your employees like adults until proven otherwise
Unless you have reason to suspect otherwise, you can safely treat your employees like adults. Certainly there is some percentage of them who will run an imaginary farm or mafia family during business hours, but more than likely that same demographic is sneaking a peek at their Blackberry or answering a business-related phone call in the off hours. Consider for a moment that these people are likely intelligent enough to realize that Mafia Wars is not work-related, so is a 50-page policy document from IT really going to change this behavior?

In most companies, people are regularly entrusted with million-dollar decisions and are usually able to manage these responsibilities quite capably without a policy document. Apply the same basic logic to your IT resources. Expect your people to make the right decisions without unwieldy lists of “don’ts.”

Just as someone who makes an inappropriate business decision or steals company resources is appropriately punished, educate and reprimand those who misuse IT resources, without treating the rest of your staff like children.

Help staff use new tools appropriately
Rather than trying to craft a manifesto, work with interested parties to demonstrate new technologies or educate staff when a publicly available technology might be inappropriate for corporate use. Spend an afternoon with the marketing folks explaining the latest presence-based social media tools, and IT becomes a trusted advisor rather than the draconian “Facebook police”.

Should you see a Web-based technology that poses a definite risk to information security, educate staff on the risk and provide an alternative. Perhaps you don’t want employees putting sensitive internal information on a cloud-based storage site; if you can explain the risks in nontechnical terms and provide a reasonable alternative, most employees are willing to work with you and even offer suggestions on how IT might be able to meet a business need. If you block the latest service, you’ll spend years playing cat and mouse as users thwart each new block you put in place.

Policies make you look silly
One of the most overlooked points is that overwrought policy documents make IT look silly. Most CIOs are clamoring for the elusive concept of “IT alignment”, where IT is perceived as an integral part of the business rather than a cadre of internal order takers. The whole concept of extensive policy documents makes IT seem out of touch.

If you can intelligently summarize the risks and associated benefits of new technologies to your executive peers, you can jointly develop a strategy for monitoring and mitigating the risks and promoting and leveraging the benefits. This can and should be a sidebar discussion to IT’s other activities. When producing policies is the crowning achievement of an IT organization, it looks all the more compelling to outsource IT.

Patrick Gray is the founder and president of Prevoyance Group, and author of Breakthrough IT: Supercharging Organizational Value through Technology. Prevoyance Group provides strategic IT consulting services to Fortune 500 and 1000 companies.

5 easily overlooked desktop security tips

The desktop computer is the heart of business. It is, after all, where business gets done. But so much effort goes into securing our servers (and with good reason), that often the desktops are overlooked.

But that does not need to be the case. Outside your standard antivirus/anti-malware/firewall, there are ways of securing desktops that many users and techs might not think about. Let’s take a look.

1: Patch that OS
Although many updates occur for feature-bloat, some updates do in fact happen for security reasons. One of the first things you should do, prior to deploying a desktop, is apply all the patches available for it.

Do not deploy a desktop that has known, gaping security holes. If you are deploying a desktop that has not been fully updated, it will be vulnerable from the start. And this tip applies to all platforms, not just Windows.

2: Turn off file sharing
Those who must share files can ignore this tip. But if you have no need to share files on your desktop, you should turn this feature off.

For Windows XP, click Control Panel | Network Connections | Local Area Connection Properties. From that window deselect File And Printer Sharing, and you’re good. In Windows 7, open the Control Panel and then go to the Network And Sharing Center. Now click Change Advanced Sharing Settings in the left pane. From this new window, expand the network where you want to disable sharing and select Turn Off File And Printer Sharing. Done.
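If you prefer to manage this from a script or a remote shell on Windows 7, a related option is to disable the File And Printer Sharing firewall rule group. This is not the same switch as the sharing setting described above (it blocks the traffic rather than turning the feature off), and the rule group name is an assumption that varies with the system language, but the practical effect is similar:

netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=No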

3: Disable guest accounts/delete unused accounts
Guest accounts can lead to trouble. This is especially true because so many users leave guest accounts without password protection. This might not seem like a problem, since the guest user has such limited access. But giving access to a guest user creates a security risk. You are much better off disabling the guest account.

The same goes for unused accounts. This is such a common mistake. Machines get passed around from user to user in many businesses, and the old users do not get deleted. Don’t let this happen to you. Make sure the users on your system are actually active and need access to the machine. Otherwise, you have yet another security hole.
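On a Windows machine, both steps can be done from an administrative command prompt as well as from the Control Panel; a small sketch, where olduser is a hypothetical account name:

net user guest /active:no
net user olduser /delete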

4: Employ a strong password policy
This should go without saying. SHOULD. But how many times do you come across the word “password” as a password? Do not allow your users to make use of simple passwords. If the password can be guessed with little effort, that password should never be used.

This can be set in server policies. But if you don’t take advantage of policies, you will have to enforce this on a per-user basis. Do not take this lightly. Weak passwords are one of the first ways a machine is compromised.
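If the desktop is not governed by a domain policy, you can still tighten the local account policy from the command line. The numbers below are illustrative assumptions, not recommendations:

net accounts /minpwlen:12 /maxpwage:90 /uniquepw:5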

5: Mark personal folders/files private
You can enable folder/file sharing on a machine but still have private folders. This is especially important for personal information. Some businesses might not allow personal files to be saved on desktop machines, but that’s a rarity. If you work in a company that allows you to house personal data, you probably won’t want your fellow employees to have access to it.

The how-to on this will vary from platform to platform (and is made even more complex by the various editions of the Windows platform). But basically, you change the security permissions on a folder so only the user has access to the folder. To do this, right-click on the folder and select Properties. From within the Properties window, go to Security and edit the permissions to restrict access to just the user.
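As one concrete sketch for Windows Vista and later, icacls can strip inherited permissions from a folder and grant access to a single user; the path and user name here are hypothetical:

icacls "C:\Users\joe\Private" /inheritance:r /grant:r joe:(OI)(CI)F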

When ‘open source’ software isn’t truly open source

Richard Stallman may have kick-started the Free Software and open source software community, but the Open Source Initiative was founded in 1998 by Bruce Perens and Eric Raymond to offer an alternative approach to defining and promoting the same software development processes and licenses, and that approach has gotten the lion’s share of public recognition since.

Free Software is a term that both promotes Stallman’s ideological goals regarding how software is distributed (thereby turning off business-oriented software users who disagree with Stallman’s ideology) and is easily conflated with software that simply doesn’t cost anything.

Perens and Raymond coined the term “open source software” to refer to software developed under essentially the same conditions as Stallman’s Free Software definition.

Once the term “open source software” was coined, it was also defined. The official Open Source Definition is clear, and explains how software must be licensed to qualify as open source software.

The specific points of the Definition address issues related to:

  1. Redistribution
  2. Source code
  3. Derived works
  4. Integrity of the author’s source code
  5. No discrimination against persons or groups
  6. No discrimination against fields of endeavor
  7. Distribution of license
  8. License product specificity
  9. License restrictions on other software
  10. License technology neutrality

A summary of the effect of the conditions mandated by that Definition is available in the Wikipedia article about open source software:

Open source software (OSS) is computer software that is available in source code form for which the source code and certain other rights normally reserved for copyright holders are provided under a software license that permits users to study, change, and improve the software.

Unfortunately, the fact is that many people misuse the term “open source software” when referring to their own software. In the process of looking for a decent dice roller IRC bot in 2010, I came across one called Bones. On the announcement page for it, Bones: Ruby IRC Dicebot, its author said the dicebot is:

Free: Like most IRC bots, Bones is open source and released free of charge.

In subsequent e-mail discussion with its author, it turned out that his definition of “open source” is substantially different from that of the Open Source Initiative, me personally, and the entire open source community–to say nothing of Microsoft, Oracle, tech journalists, and just about everybody else who uses the term:

Question: You said in your page for bones that “Like most IRC bots, Bones is open source and released free of charge.”  What open source license are you using for it?

I haven’t released it under any particular open source license. It’s only open source in so far as it isn’t closed source.

In an attempt to clarify the legal standing of the IRC bot, I asked further:

Any chance I could get you to let me do stuff with it under a copyfree license (my preference is the Open Works license, though BSD and MIT/X11 licenses or public domain are great too) so I can hack on yours a bit rather than just having to start over from scratch?

I don’t plan on releasing it under a license, but that shouldn’t stop you from making changes to the code if you like.

Of course, if I have the source in my possession–which is pretty much a given for any Ruby program–I can indeed make changes to it if I like. The point he ignored is that without a license setting out what permissions the copyright holder grants to recipients of the code, these recipients cannot legally share changes with others.

This effectively made any interest I had in improving his program dry up and blow away. It also effectively means that when he called it “open source”, he made an error either of ignorance or of deception. When I pointed out to him the legal problems involved, he declined to respond.

There is some argument to be made as to whether a license that conforms to the requirements of the Open Source Definition should be called an “open source” license even if it has not been certified by the OSI itself. Many of us are inclined to regard a license as an open source license if it obviously fits the definition, regardless of certification.

By that standard, the Open Works License and WTFPL (Note: the full name of the WTFPL may not be safe for reading at work, depending on your workplace; be careful clicking on that link) are open source licenses.

By the standards of the list of OSI approved licenses, however, they are not–because the OSI requires an extensive review process that lies somewhat outside the range of what many would-be license submitters have the time and resources to pursue.

Let us for argument’s sake accept that merely conforming to the Open Source Definition is sufficient to call a license “open source”, regardless of official approval by the OSI. By contrast with the Bones IRC bot, then, an IRC bot called drollbot (part of the larger droll project) that I wrote from scratch to serve much the same purpose as Bones actually is open source software, released under the terms of the Open Works License.

The simple comparison of Bones with drollbot serves to illustrate the difference between what really is open source software and what only pretends to be. The pretense, in this case, is an example of something many people call “source-available software”, where the source code is available but recipients are granted no clear legal permission to modify, redistribute, and even sell the software if they so desire–requirements of both the Open Source Definition and the Free Software Foundation’s definition of Free Software.

There are many other concerns related to how we classify software and the licenses under which we distribute it, but many of them are secondary to the simple necessity of understanding what is or is not open source software at the most basic level. Whether or not you consider a piece of software to qualify as “open source software” when its license has not been officially approved by the OSI, one thing is clear: before you go around telling people to download your “open source software”, you should give them assurance of the most basic requirement of open source software, the thing that differentiates it from software that has merely been written in a language traditionally run by an interpreter rather than compiled to an executable binary:

When you call something “open source software”, you must give all recipients a guarantee that they may modify and redistribute the software without fear of lawsuits for copyright violation. If you do not do that, by way of a license that conforms to the Open Source Definition or by releasing the software into the public domain, what you give them is not open source software. Period.

Memoize recursive functions to conserve resources

Memoization is a form of caching that is used to reduce duplication of effort by your program. In short, it is a means of caching results so that when generating large data sets the same results do not need to be recalculated as part of an otherwise elegant algorithm.

The most common use of memoization is in recursion. Most programmers are aware of recursion; some even understand and use it. The majority of modern functional programming languages provide tail call optimization, but those languages that do not (usually object oriented or merely procedural) include some of the most widely used programming languages.

Tail call optimization allows an interpreter or compiler to reuse the current stack frame for calls made in tail position, so deep recursion does not keep piling up stack frames. In languages that lack tail call optimization, memoization can be used to attack a related problem: it keeps a recursive function from performing the same operations over and over again.

An example of an algorithm that could benefit greatly from tail call optimization or memoization is the recursive definition of a Fibonacci number:

F(0) = 0
F(1) = 1
F(n > 1) = F(n-1) + F(n-2)

This is a prime example of the importance of optimization in programming. The simplistic recursive source code for this in Ruby would look something like this:

def fib(n)
  return n if [0,1].include? n
  fib(n-1) + fib(n-2)
end

Unfortunately, the most widely used production version of Ruby today, Ruby 1.8.7, does not support tail call optimization (look for it in Ruby 1.9+). If n = 4 in the above code, it ends up being calculated like this:

fib(4)
= fib(3) + fib(2)
= (fib(2) + fib(1)) + (fib(1) + fib(0))
= ((fib(1) + fib(0)) + 1) + (1 + 0)
= ((1 + 0) + 1) + 1
= (1 + 1) + 1
= 2 + 1
= 3

That’s a lot of effort just to get the number three. The problem is that, until the numbers start getting down to 0 or 1, every operation requires two sub-operations. With a high enough Fibonacci number to start the process, the number of operations required gets absolutely insane. Using the Ruby REPL, you can see that the fourth Fibonacci number requires nine calls to the fib() method, and the fifth Fibonacci number requires 15 calls:

> irb
irb(main):001:0> $opcount = 0
=> 0
irb(main):002:0> def fib(n)
irb(main):003:1>   $opcount += 1
irb(main):004:1>   return n if [0,1].include? n
irb(main):005:1>   fib(n-1) + fib(n-2)
irb(main):006:1> end
=> nil
irb(main):007:0> fib 4
=> 3
irb(main):008:0> $opcount
=> 9
irb(main):009:0> $opcount = 0
=> 0
irb(main):010:0> fib 5
=> 5
irb(main):011:0> $opcount
=> 15

By the time you get to fib 20, you have 21,891 calls to fib(). fib 30 took more than 10 seconds to complete in a test run, and 2,692,537 calls to fib().

Memoization greatly reduces the number of such operations that must be performed, by only requiring each Fibonacci number to be calculated once.

For the simple version, start by creating an array to hold already calculated numbers. For each calculation, store the result in that array. For each time a recursive function would normally calculate one of those numbers, check to see if the number is stored in your array; if so, use that, and if not, calculate and store it. As a jump-start to the array, set the first two elements to 0 and 1, respectively.

In Ruby, a Fibonacci number generator might be modified to look like this:

$fibno = [0,1]
def fib(n)
  return n if $fibno.include? n
  ($fibno[n-1] ||= fib(n-1)) + ($fibno[n-2] ||= fib(n-2))
end

By caching values as you go so that fewer recursive calls are needed, you can get the result of fib 30 pretty much instantaneously. The total number of recursive calls to fib() is reduced from 2,692,537 to a mere 29. In fact, the number of calls to fib() increases linearly, so that the number of calls is always equal to the ordinal value of the Fibonacci number you want minus one. That is, fib 50 makes 49 calls to fib(), and fib 100 makes 99 calls to fib().

That assumes you reset $fibno every time. You can leave it alone, and reduce the number of calls to fib() even more on subsequent calls. For instance, try fib 100 with $fibno = [0,1], and 99 calls to fib() will be made. Try fib 40 without resetting $fibno, though, and only one call to fib() will be made, because $fibno already contains the appropriate value.

You can also use a somewhat simpler approach to caching than the above example. Instead of the number of calls to fib() only increasing by one for each increase in the ordinal Fibonacci value, it increases by two, resulting in 59 operations instead of 29 for fib 30:

$fibno = [0,1]
def fib(n)
  $fibno[n] ||= fib(n-1) + fib(n-2)
end

Similar caching mechanisms can be used to achieve similar effects in other languages that do not optimize tail calls, such as C, Java, and Perl. In fact, in Perl this caching idiom has a special name: the Orcish Maneuver. The name comes from the or-equals operator, ||=, which can be pronounced “or cache” in cases such as memoization.

Say “or cache” very quickly, and you get the name of something that bears the stamp of a favorite fantasy monster. Perhaps this is how an Orc would optimize a recursive function, after all.

In Perl, the term Orcish Maneuver is typically applied to sorting functions rather than recursive series generation functions as in the case of memoizing a Fibonacci number generator. The canonical Perl example of the Orcish Maneuver looks something like this:

my @words = (
  'four',
  'one',
  'three',
  'two'
);

my %numbers = (
  'one' => 1,
  'two' => 2,
  'three' => 3,
  'four' => 4,
);

my @numerically_sorted_words;

sub str2n {
  my $key = shift;
  $numbers{$key};
}

{ my %cache;
  @numerically_sorted_words = sort {
    ($cache{$a} ||= str2n($a)) <=> ($cache{$b} ||= str2n($b));
  } @words;
}

foreach (@numerically_sorted_words) { print "$_\n"; }

A probably more useful application of Perl’s Orcish Maneuver would be for month names, but this example at least shows how the maneuver is used.

5 tips for building a successful global IT workforce

Successful managers agree: The strength of an organization’s IT talent pool is a critical component to building and growing a successful company.

And today’s global environment makes it possible for an organization to build its workforce without restrictive geographical boundaries. An organization can pull talent and resources from around the world to build the strongest and most efficient team possible.

As a result, it is important for organizations to develop a systematic approach for recruiting and maintaining talent on a global level. In addition, organizations must implement strategies to optimize and harness the global IT service talent that best meets their IT service business requirements.

The following five tips will help CIOs and executives recruit global IT talent more effectively to ensure their organization’s workforce is built for success.

1: Set objectives but allow for flexibility in roles
Setting goals and objectives is necessary when determining roles and responsibilities in an organization; however, it is also important to maintain flexibility to make room for individuals’ unique skills and experiences.

Maintaining a capable IT workforce is most effective when role requirements are clearly defined while still being flexible enough to incorporate the broad range and the scope of skills and talents available. For example, executives may consider redesigning service technicians’ roles so an employee’s unique skill set can shine through.

By remaining flexible, it is easier to ensure organizational culture will support a diverse group of employees who thrive by playing up their strengths and following their instincts.

2: Recruit and promote from within
Companies should work to identify internal resources to develop and grow their workforce. Many managers have found that their company’s most valuable resources lie inside the organization. Given the right training and support, internal candidates are put in a position to perform a broader variety of tasks, a particularly vital capability within the IT service industry where technologies and processes are constantly evolving.

3: Hire for innate talents and be willing to invest in training
Most executives trying to build a successful organization understand that it’s important to find a balance between innate abilities and specific experience or qualifications when looking for IT talent. Often, hiring managers find a candidate who may have the right personality, experience, and problem-solving skills but may lack a particular certification or technical skill set.

When it comes to pooling talent and building an IT workforce, decision makers need to understand that even though certain skills can be taught, the innate abilities and attitude of a prospective hire can’t be instilled with training. When a candidate with the right personality becomes available, even if he or she lacks a certain skill set, it helps to define the skills required for the position and determine whether the gaps in certification can be closed through training.

4: Build a candidate pipeline
To maintain the most efficient and well-balanced IT workforce, it’s essential to be consistently on the lookout for talent. This is even more important as talent pools continue to globalize, resulting in larger and more diverse candidate pools. Having a candidate pipeline will reduce the likelihood an organization will be caught off guard or unprepared when a position opens up.

Managers should ensure that they are never making hasty decisions or missing opportunities for talent. Examine your organization’s business plan and try to anticipate future needs, including geographical expansions or relocations. Network and nurture relationships in an effort to recruit talent that aligns with your organization’s future needs and direction.

5: Diversify
The most diverse organizations tend to be those that are flexible and strategic about recruiting and maintaining IT talent. Executives at these organizations understand the value of diversity in experience, perspective, and skill when building a workforce.

This is even more important when building a global IT workforce, as there is a greater opportunity to connect with individuals with a wide variety of skill sets. By leveraging the existing talent pool, nurturing global networks, and investing in diversity, organizations can effectively mine new IT service talent sources, build multicultural talent, and be well positioned for market success.

Summary
In today’s global environment, decision makers must consider business goals and the direction of the organization when mining talent and managing an IT workforce. By prioritizing company needs, being open to diversity and unique skill sets, and valuing the talent that already exists internally, executives are more apt to employ a workforce aligned with company values–one that plays a strategic role in a company’s ability to develop new services and expand into new markets.

By focusing on having the right resources to identify new sources of talent, optimizing the IT talent pool and building a strong talent pipeline, organizations can gain access to skills that support business goals, build bench strength and recruit effectively to enhance competitive advantage.

Jay Patel is director of professional services for Europe, Middle East, and Africa at Worldwide TechServices.

Configure a time server for Active Directory domain controllers

Time management is one of the more critical aspects of system administration. Administrators frequently rely on Active Directory to sync time from client servers and workstations to the domain. But where does Active Directory get its time configuration?

Well, that depends on various factors. Default installations may go directly to Microsoft, and virtual machines may set themselves to update to the host servers.

The best way to ensure the time is accurate on a consistent basis is to establish one authoritative time source for your organization. An authoritative time source is the time server(s) that all systems on your network trust as having the accurate time.

The source can be an Internet time server or the NTP pool, or it can be something you fully administer internally. Regardless, a designated authoritative time source for a given organization should be determined ahead of time.

From there, you can configure Active Directory domain controllers with the PDC emulator role in a domain to use this list of servers explicitly for their time. Read this TechNet article to learn how the time service operates within a forest. The main takeaway is the w32tm command is used to set a list of peers for specifying where time is sourced for a domain.

The command snippet below sets the time peer to an Internet NTP server:

w32tm /config /manualpeerlist:"nist.expertsmi.com" /syncfromflags:manual /reliable:yes /update

If you want to specify a pool of servers, separate them with spaces inside the quotation marks. When the command is run on a domain controller, it only needs to be executed once, and the change is reflected in the registry. Figure A shows this on a sample domain controller.

Figure A


I recommend applying this configuration to all domain controllers and possibly even making it a Group Policy object as a startup script for the Domain Controllers organizational unit within Active Directory.
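Once a domain controller has been configured, a quick way to confirm where it is getting its time is to query the Windows Time service. These w32tm query switches are available on Windows Server 2008 and later:

w32tm /query /source
w32tm /query /peers
w32tm /query /status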

This tip applies to current Windows Server technologies, though not much has changed over the years with regard to this topic. See what I mean by reading this tip by Mike Mullins posted in February 2006: Synchronize time throughout your entire Windows network.

What you need to know about OpenSSH key management

OpenSSH is one of my favourite tools, and one I take for granted because I use it all day, every day.

It is a Swiss Army-knife of coolness that can be used to provide secure connections to insecure sites in insecure places like free Wi-Fi-offering coffee shops. OpenSSH can be used to remotely administer systems, provide encrypted file sharing via sshfs, bypass draconian corporate firewall policies (okay, maybe that isn’t the best example of OpenSSH coolness), and a whole lot more.

Before you’re really able to appreciate all that OpenSSH has to offer, you have to learn the basics, and that means key management. So we’re going to look at how to manage and use OpenSSH public/private keypairs.

Generating OpenSSH keys is easy, and doing so allows for passphrase-based keys to be used for login authentication instead of providing your password. This means you have the private key stored locally, and the public key is stored remotely. The two keys together form a cryptographically secure keypair used to perform authentication, without sending a password over the network.

To generate an RSA2 key (2048 bits by default) with a special comment to identify its use, saved to ~/.ssh/server1_rsa and ~/.ssh/server1_rsa.pub, use:

$ ssh-keygen -C "special key for server1" -t rsa -f ~/.ssh/server1_rsa
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/joe/.ssh/server1_rsa.
Your public key has been saved in /home/joe/.ssh/server1_rsa.pub.
The key fingerprint is:
fb:8a:23:82:b9:96:a1:9c:d5:62:58:15:9a:8f:f9:ed special key for server1
The key's randomart image is:
+--[ RSA 2048]----+
|     ..          |
|    o.           |
|   o.            |
|   .+            |
|  oo..  S        |
| o +...  .       |
|oo* .. ..        |
|+=. . o. .       |
|o. . ..E...      |
+-----------------+

Keeping this key to yourself isn’t useful, so it needs to be copied to a remote server where it will be used. You can do this manually by copying it over and then moving it into place, or you can use the ssh-copy-id command:

$ ssh-copy-id -i ~/.ssh/server1_rsa joe@server1.domain.com

Once you provide the account password, the public key from ~/.ssh/server1_rsa.pub will be copied to the remote server and appended to ~/.ssh/authorized_keys. You should then be able to log in using the key, and its passphrase, from that point forward.
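If ssh-copy-id isn’t available on your system, the same result can be achieved manually; a sketch using the example account above:

$ cat ~/.ssh/server1_rsa.pub | ssh joe@server1.domain.com 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'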

Using the ~/.ssh/config file can really make life easier. With that configuration file, you can easily set up various options for different hosts. If you have multiple SSH public/private keypairs and want to use a specific keypair for a specific host, using ~/.ssh/config to define it will save some typing. For instance:

Host server1 server1.domain.com
  Hostname server1.domain.com
  User joe
  IdentityFile ~/.ssh/server1_rsa

Host server2 server2.domain.com
  Hostname server2.domain.com
  User root
  IdentityFile ~/.ssh/server2_rsa

In this example, when you do ssh server1, it will connect to server1.domain.com using the private key in ~/.ssh/server1_rsa, logging in as “joe”. Likewise, when connecting to server2.domain.com, the ~/.ssh/server2_rsa key is used and you will connect as the root user.

If you have changed the remote server’s SSH key (either by installing a new operating system, re-using an old IP address, changing the server keys, whatever), and you have strict key checking enabled (usually a default), you may see a message like this:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!

Warnings like this should be taken seriously. If you don’t know why the key has changed, find out from the administrator of the box before you assume the change is benign, make any changes, or even complete the login. If you know this is an expected change, then use the ssh-keygen tool to remove all keys that belong to that particular host from the known_hosts file (as there may be more than one entry for the host):

$ ssh-keygen -R server1.domain.com

This is especially useful if you are using hashed hostnames. What are hashed hostnames, you ask? Hashed hostnames are a way to make the known_hosts file not store any identifying information on the host. So in the event of a compromise, an attacker would be able to obtain very little information from the hashed file. If you had an entry like this in your ~/.ssh/known_hosts file:

server4,10.10.10.43 ssh-rsa
AAAAB3NzaC1yc2EAAAABIwAAAIEAtNuBVGgUhMchJoQiDTZ+Nu1jzJOXxG9vo5pVWSbbic4kdAMggWrdh
XBU6K3RFIEwxx9MQKR81g6F8shV7us0cc0qnBQxmlAItNRbJI8yA4Ur+2ggFPFteqUEvOhA+I7E8REcPX87
urxejWK3W11UqOXyjs7cCjoqdps8fEqBT3c=

This clearly identifies that you have at some point connected to “server4”, which has an IP address of 10.10.10.43. To make this information unidentifiable, you can hash the known_hosts file to make the above look like this instead:

|1|sPWy3K2SFjtGy0jPTGmbOuXb3js=|maUi1uexwObad7fgjp4/TnTvpMI= ssh-rsa
AAAAB3NzaC1yc2EAAAABIwAAAIEAtNuBVGgUhMchJoQiDTZ+Nu1jzJOXxG9vo5pVWSbbic4kdAMggWrdh
XBU6K3RFIEwxx9MQKR81g6F8shV7us0cc0qnBQxmlAItNRbJI8yA4Ur+2ggFPFteqUEvOhA+I7E8REcPX87
urxejWK3W11UqOXyjs7cCjoqdps8fEqBT3c=

It’s the same host, but now in a format that only ssh and sshd will understand. This is where the ssh-keygen -R command is so necessary, since trying to find the entry for host “foo.domain.com” in a hashed file would be impossible. To hash your known_hosts file, use:

$ ssh-keygen -H

There is so much that can be done with OpenSSH, and this tip mostly dealt with key management. Next, we will look at some quick one-liner commands to help accomplish some basic, and some not-so-basic, tasks with OpenSSH.

Open source is not just for Linux: 14 apps great for Windows users

Recently I had a client that had a need that simply couldn’t be fulfilled with proprietary software. Well, that’s not exactly true. There were plenty of proprietary titles that could do what she needed done, but none that were at her budget.

So I did what any advocate of open source software would do–I introduced her to the world of FOSS. She was amazed that so much software existed that was not only quality, but very cost effective.

That little interaction reminded me that the biggest hurdle open source software faced was not incompatibility or a lack of solid code, but a lack of recognition. The majority of Windows users out there believe that if you want good software, you have to pay for it. So I decided to highlight the open source projects out there that run on Windows so you could, in turn, help spread the word by using and promoting these tools to your fellow Windows users.

Now…on to the software.

#1 LibreOffice: This one is, with the exception of the “new name”, obvious. If you are looking for the single best replacement for MS Office, look no further than LibreOffice. Yes, it is a fork of OpenOffice, but it forked at version 3.x so it benefited from an already solid code base. This piece of software is a must-have for open source advocates. And don’t worry, although it may claim to be in “beta”, many users (including myself) are using it in production environments.

#2 Scribus: If you are looking for desktop publishing for creating marketing materials, manuals, books, fliers, etc.–look no further than Scribus. Scribus can do nearly everything its proprietary counterparts can do (such as PageMaker and QuarkXPress) only it does it with a more user-friendly interface and doesn’t require nearly the resources the competition begs for.

#3 The GIMP: Need a raster editor? The GIMP is as powerful as Photoshop and costs roughly US$700 less. And if you’re unhappy with The GIMP’s current interface, hold off until around March, when the new single-windowed interface will arrive. Take a look at how the new UI is evolving at the Gimp Brainstorm.

#4 Inkscape: Inkscape is to vector graphics what The GIMP is to raster graphics. Of course anyone that has worked with vector graphics knows they are not nearly as easy to work with as raster graphics, but Inkscape goes a long way to making that process as easy as it can be.

#5 GnuCash: This is the de facto standard accounting software for Linux. GnuCash is amazing in features, usability, and reliability. I have been using GnuCash for years and have yet to encounter a single problem. It does reporting, double-entry accounting, small business accounting, vendors/customers/jobs, stock/bond/mutual fund accounts, and much more.

#6 VLC: VideoLAN’s VLC is the multimedia player that can play nearly everything. In fact, VLC claims, “It plays everything”. I can vouch for that claim. I have yet to find a multimedia format VLC couldn’t handle. Ditch Windows Media Player, with its crash-prone, resource-hogging behavior, and migrate to a lightweight, reliable, all-in-one multimedia player.

#7 Firefox: Another open source project that goes without saying. Firefox is quickly helping the “alternative browsers” to usurp the insecure, unreliable IE as the king of browsers. Firefox 4 should be out very soon and it promises more speed and security.

#8 Claws Mail: This is my mail client of choice. Not only is Claws Mail cross-platform, it’s also the single fastest graphical mail client available. If you want a mail client that starts up in mere seconds, has plenty of plugins, and can be configured more than any other mail client, Claws Mail is your tool. Unfortunately, Claws Mail cannot connect to an Exchange server, but for all of your POP/IMAP accounts, this is what you need.

#9 VirtualBox: No, not everyone is working with virtual machines, but for those of you who are, make sure you give VirtualBox a go before you dive in and purchase VMware. VirtualBox has many of the features that VMware offers but can bring you into the world of virtual machines without VMware’s overhead cost.

#10 TrueCrypt: This is one of those applications for the paranoid in all of us. If you need encrypted filesystems to safely hide away all of your company secrets, or just your personal information, then you need to try TrueCrypt. TrueCrypt creates a virtual encrypted disk that can be mounted and unmounted only with the configured passphrase. Without that passphrase, the data within the filesystem cannot be reached. Just make sure you do not forget or lose that passphrase.

#11 Calibre: With the amazing growth of ebooks (Amazon reported that ebooks accounted for 60 percent of all books sold in 2010), people need an easier way to manage their collections or convert their files into readable ebook formats. Calibre is one of the best tools for this job. I have four ebooks on sale at various ebook resellers (check Smashwords for me) and have used Calibre to help manage the conversion from .rtf format to a usable file. The only format Calibre has trouble converting to is PDF.
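
Calibre also ships with command-line tools alongside its GUI. As a rough sketch (the file names here are just placeholders), converting an RTF manuscript into an EPUB can be as simple as:

$ ebook-convert manuscript.rtf manuscript.epub

ebook-convert infers the input and output formats from the file extensions you give it.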

#12 Audacity: Anyone who needs audio editing software should take a look at this powerful open source selection. Audacity will enable you to create podcasts and music, convert audio to various formats, splice files together, change the pitch of files, and much more.

#13 PeaZip: Who doesn’t have to work with archives? Nearly every PC user has had to unzip a file or create an archive for emailing. Why not do this with an open source tool that can handle nearly every archiving format on the planet?

#14 ClamWin: Why wouldn’t you trust an anti-virus solution created by open source developers? You should. ClamWin is a solid antivirus solution and should soon have its real-time scanning component completed. If you need an antivirus solution that doesn’t drag your machine to a screeching halt during scans or insist on installing add-ons you do not want or need, give ClamWin a try.

I could go on and on with the list of open source software for Windows, but you get the idea. Open source is not just for Linux users. Users of all platforms can benefit from adopting open source titles. Not only will these software solutions save you money immediately, they will save you more and more money over time as you don’t have to pay for software support when something goes wrong–just e-mail a developer or hit the forums to find quick and available solutions.

Open source is not ideal for every situation, but you will be surprised how many times you will find an open source solution superior to its proprietary cousins.

Cloud, mobility transform software space

The maturing cloud computing model and rise of enterprise mobility are making their mark on the software industry, impacting the way independent software vendors (ISVs) and system integrators (SIs) do business and opening the doors for non-enterprise players to enter the space, note industry insiders.

According to Trent Mayberry, Accenture’s technology geographic lead for Asean, as software moves away from the shrink-wrapped sales and app distribution models, toward the software-as-a-service model, there will be changes in the way ISVs and SIs operate.

With regard to ISVs, he said: “Governance is perhaps one area that will take new forms. Processes that have been traditionally rigid will change and give way to more adaptive models to encourage the viral adoption that cloud and mobility promises.”

Mayberry also noted that enterprises are looking at significantly changing or eliminating existing business processes internally, as well as engaging with customers in new ways. As a result, SIs will have to adapt to customers’ changing needs to stay relevant, he said.

Gartner’s research director for software markets, Yanna Dharmastira, concurred. She pointed out that within the infrastructure arena, ISVs will have “broader roles and responsibilities” to play as they will need to provide existing software and services via the platform-as-a-service (PaaS) model, in a more reliable and secure way.

These vendors would then need to enhance their hosting capabilities and overall technical support services, Dharmastira said.

To better handle cloud-based requests, the analyst added that ISVs will also look to acquire smaller companies that already have SaaS offerings.

Asheesh Raina, principal research analyst for software at Gartner, noted that in the application layer, ISVs will also need to consider which server platforms they want to develop for. Their choice will determine the level of optimization and the type of drivers needed to run the apps, Raina said in an e-mail interview.

Cloud race heats up
Tan Jee Toon, country manager for IBM Singapore’s software group, said the need to ensure the availability of services–spanning across the stack from infrastructure to applications–will underscore the maturing cloud computing model as a “true example” of service-oriented architecture (SOA), an application delivery framework which was heavily touted a few years back.

“Cloud is SOA at the systems level,” Tan explained in an e-mail. “The fundamentals are the same [except] that it shifts the level of abstraction to a few levels lower down the stack. The services in the new environment [that] are provided by cloud are now meant for infrastructure consumption [too], as compared to application consumption only in traditional models of SOA.”

Several top IT vendors are already placing their bets on the advancement of cloud computing, including Microsoft. Michael Gambier, the software giant’s Asia-Pacific general manager for server and tools, told ZDNet Asia in an e-mail that the market is “shifting increasingly toward IT-as-a-service”, and this encompasses infrastructure-as-a-service (IaaS), PaaS and SaaS.

“We continue to invest in the cloud because we see it as an enormous area for growth.

“[As we build out our capabilities], we can offer customers the full range of our offerings, whether it is an on- or off-premise environment. We’re also working with customers to help them understand what makes sense for them to move to the cloud,” Gambier said.

He revealed that Redmond last March dedicated 70 percent of its engineers to work on cloud-based offerings. This year, that figure is expected to grow to around 90 percent.

Such efforts bode well in the eyes of Tim Sheedy, senior analyst and advisor of Forrester Research’s CIO group.

In a phone interview, Sheedy identified Redmond as the frontrunner among top software vendors such as IBM, Oracle and Hewlett-Packard (HP) to embrace cloud computing. He pointed to Microsoft’s Azure and Dynamics CRM Online offerings as examples of the company’s push toward the cloud space.

“It knows that it is in trouble with Google challenging its desktop operating system and Office productivity suite, and that it has to reinvent in order to stay competitive,” the Forrester analyst said. “I expect to see deep cloud connectivity in future Windows OS-based products.”

Microsoft’s cloud rivals are not sitting idle either.

Oracle’s vice president of Fusion Middleware, Chin Ying Loong, said in his e-mail that enterprises, particularly major companies, are evolving their current IT infrastructure to become more “cloud-like”.

These businesses, Chin added, are looking to improve internal services to various business units in order to “provide greater agility and responsiveness to business needs, higher quality of service in terms of latency and availability, lower costs and boost utilization”.

Moving forward, he said the key focus in 2011 will continue to be middleware that can offer services securely and enable strong governance, such as Oracle Fusion Middleware, a set of tools touted to help customers build a flexible infrastructure so their organizations can be more agile.

Mobility in demand
Other vendors are leveraging the cloud to deliver their services to the increasing number of devices used by the mobile workforce.

SAP, for instance, is placing its bets on mobility and real-time analytics, which it said are technologies customers are asking for today. Steve Watts, president of SAP Asia-Pacific and Japan, said the software vendor has been steering its direction toward these two arenas for the past nine months.

Enterprise mobility, in particular, is “one of the fastest growing areas” in the business space, Watts told ZDNet Asia, adding that SAP aims to have 1 billion mobile workers running on SAP software.

“Mobility will become a key enabler for our customers, from managing industrial process flows across borders to managing people locally, and it will change how organizations operate,” he said.

SAP BusinessObjects Explorer for Apple’s iPhone, for example, is a business intelligence (BI) tool that he said caters to mobile workers looking for quick, on-the-move access to business-critical information which was previously only available on stationary devices.

Paul Schroeter, strategic marketing manager of software at HP Asia-Pacific and Japan, agreed that the importance of mobility is “coming in by stealth” as workers want flexibility for IT resources.

To address this, Schroeter said HP will continue to play strongly in the application lifecycle management (ALM) space, particularly in ensuring the security of apps deployed within the enterprise.

He pointed to the acquisitions of Arcsight and Fortify, as well as the release of its ALM 2011 suite in December last year, as evidence of HP’s focus on the development, deployment and maintenance of enterprise apps.

Non-enterprise competition
According to Forrester’s Sheedy, the demand for increased enterprise mobility will open doors to non-enterprise vendors to muscle into the core industry. The analyst cited Apple and Google as prime contenders.

New software delivery models, such as native app stores and Web sites, are proving to be a “disruptive force” in the enterprise space, he said, noting that mobile operating system (OS) players such as Apple with iOS and Google with Android, the two dominant forces in this market, have the opportunity to enter the space.

These players could also act as SIs to merge enterprise apps published on the iOS or Android platforms into companies’ backend IT systems, Sheedy suggested.

With Android-based smartphones and Apple’s iPhones and iPads increasingly becoming employees’ most-used devices to access business data, this development might open new revenue streams for the mobile platform players, he added.

“Whether they will do so is another matter, but I do see that these two companies will take revenue away from established software vendors like IBM, Microsoft and Oracle,” the analyst said. He added, though, that the incumbents will not lose major IT deals just yet, as the mobility trend is still in its infancy stages.

And it seems Google, for one, is looking to beef up its play for the enterprise space which the vendor already services via its Google Apps e-mail and productivity suite. Just last week, the Internet giant announced that it is removing the downtime clause from customers’ service level agreements (SLAs) for its Google Apps service as part of efforts to differentiate itself from competitors and entice new firms to sign up for its cloud-based services.

Security, mobile and cloud hit S’pore IT courses

In keeping with “hot” technology trends particularly in mobile, security and cloud computing, a number of schools in Singapore have introduced new courses or revamped existing curricula to groom a workforce ready for the new demands of the IT sector.

The School of Informatics and IT (IIT) at Temasek Polytechnic (TP), for example, recently rolled out a new Diploma in Digital Forensics to cater to the rising demand for IT security professionals with the skills to investigate crimes committed using computers and digital devices. The first batch of students will join the course this April.

In an e-mail interview with ZDNet Asia, course manager Mandy Mak explained that digital forensics involves the scientific analysis of evidence from sources such as computers, cell phones and computer networks to prosecute those who have hacked into the computers and information systems of organizations.

The landscape of IT security, noted Mak, is ever changing. While ensuring the security of information systems remains an imperative for corporations, there is a growing need to respond to and investigate security threats and incidents due to the pervasive use of digital and mobile devices in society, she pointed out.

“The increasing concern over data breaches, fraud, insider trading and [other] crimes using digital devices has led to a need for digital forensic experts who can gather evidence and trace how a crime has been carried out,” Mak elaborated.

Over at TP’s School of Engineering, the Diploma in Infocomm & Network Engineering has been tweaked. The program was formerly called the Diploma in Info-Communications, and course manager Yin Choon Meng said the change was made to more accurately reflect the focus of the course curriculum and the competencies of its graduates, especially in the area of network and communications engineering.

Yin highlighted in an e-mail that besides the technical foundation of information, network and communications engineering, students under the program are also exposed to social media, network security and cloud computing. This is to provide students the insight into complete ICT ecosystems and hence equip them with the capabilities to flourish in the IT, networking and communications industries, he noted.

According to Yin, companies are increasingly making use of new media channels and cloud computing, which give rise to concerns about network security. In addition, the proliferation of smartphones and tablets and the introduction of Singapore’s next-generation national broadband network are giving rise to new business offerings and new ways rich media can be delivered, he said.

Academia keeps watchful eye on industry
Mak, who is also the deputy director of Technology & Academic Computing, said that the IIT faculty keeps a close watch on tech trends including virtualization and “inevitably covers such topics in some of the existing subjects we teach”.

Mobile applications are also a “hot tech trend”, she noted, adding that the Diploma in Mobile & Network Services offered by the IIT has become increasingly popular with students. She attributed this to the “interest in creating mobile apps” for platforms including iOS and Android.

The growing trend of mobile communications also popped up as a brand new course at the Institute of Technical Education (ITE). An ITE spokesperson replied in an e-mail that the high penetration rate of smartphone users in Singapore made it “timely” for ITE to launch a new Mobile Systems and Services certification course. The program is designed to produce a “new breed of mobile systems support associates who are well-versed in mobile network infrastructure and capable of developing mobile applications”, she explained.

At the Nanyang Technological University’s School of Computer Engineering (SCE), the curriculum is reviewed and tweaked annually to incorporate what it deems as “sustained IT and industry developments as opposed to short-lived fads”, according to Professor Thambipillai Srikanthan, who chairs the SCE.

In an e-mail interview, he explained that a course update can range from revising existing syllabi to keep up with new technology and industry advances, such as languages like HTML 5, to introducing new electives.

Srikanthan said the SCE recently introduced a host of new final-year electives as part of its revamped curriculum, which include cloud computing and its related applications, augmented and virtual reality, and data analytics and mining.

Trends influence, not dictate
While tech trends do play a crucial part in the planning of IT courses, they do not dictate the entire curriculum, Srikanthan emphasized. The curriculum not only has to train graduates for current times, but more importantly, to prepare them to adapt to the rapidly changing IT technologies, he pointed out.

The SCE curriculum, for example, is carefully designed to achieve a balance between the fundamentals and technologies so that the students’ skills do not become obsolete by the time they graduate, said Srikanthan. “It is the fundamentals which will serve as a bedrock to allow the graduates to remain versatile and adapt to the evolving technology developments.”

Benjamin Cavender, principal analyst at China Market Research Group, concurred. He noted there is a need for courses and students to stay current as standards are changing extremely quickly and the development time for new technologies is shortening.

“It’s definitely important that the curriculum [focuses] on current and emerging trends, but it’s important that information is presented in a way that encourages students to stay current throughout their careers,” he said. “In that sense, learning how to learn becomes more important than what they learn.”

Dual-core to boost smartphone multimedia

With the arrival of dual-core smartphones, consumers can expect better multimedia experience while enterprise users stand to benefit from boosted productivity apps and video quality, say industry players.

At the Consumer Electronics Show earlier this month, phonemakers Motorola and LG announced that they will be launching dual-core smartphones this year. Motorola plans to launch two handsets–Atrix and Droid Bionic–while LG is releasing the Optimus 2X. Reports noted that other device manufacturers will likely follow suit.

In a phone interview with ZDNet Asia, T.Y. Lau, senior analyst at Canalys, highlighted that as dual-core mobile devices are not yet out in the mass market, most of the benefits of dual-core smartphones are based on what is advertised by the manufacturers. LG launched its first dual-core Optimus 2X early this week, but only in its homeland South Korea, she noted.

According to Lau, mobile manufacturers are touting better multimedia capabilities in dual-core smartphones. Video and audio quality will improve on such handsets, she said, adding that the function will be important for consumers when high-definition (HD) content is available. Consumers will also be able to enjoy smoother gameplay for console-styled games or even 3D games, she noted.

Patrick Fong, product manager of mobile communications at LG Electronics, concurred. In an e-mail, Fong said the Optimus 2X allows users to view and record videos in full HD 1080p as well as play graphically intensive games.

Canalys’ Lau noted that the boosted multimedia capabilities might be able to push the growth of video in the enterprise. Networking company Cisco Systems, she said, has been pushing the concept of “video as the next voice” and dual-core smartphones may be able to make that vision a reality.

Aside from multimedia applications, Lau noted that dual-core can bring other benefits such as faster Web browsing experience, a more responsive touchscreen, and improved multitasking. Enterprise users can also benefit from better enterprise applications such as customer relations management or business intelligence tools, said Lau.

Contrary to the belief that a smartphone needs more energy to power two cores, Lau said power consumption for dual-core phones is actually reduced. She explained that the processing workload can be shared between the two cores, whereas a single-core chip might be overloaded.

According to Qualcomm’s president for Southeast Asia and the Pacific John Stefanac, the company’s dual-CPU cores are asynchronous or able to operate at independent voltages and frequencies. This enables “finer power control and optimal performance at low power”, he explained in an e-mail interview.

Qualcomm was slated to release Snapdragon, its dual-core chip for smartphones, last year but has since indicated the launch will take place this year.

Two not for mainstream, yet
Stefanac told ZDNet Asia dual-core smartphones will be targeted initially at the high-end segment.

LG, said Fong, is labeling the Optimus 2X as a “super smartphone” and will be geared toward early adopter power users. The phone will be available in Singapore at the end of the first quarter, he added.

A Motorola spokesperson was unable to indicate in his e-mail reply when the Motorola Atrix will be available in Asia. The Droid Bionic is exclusive to Verizon Wireless in the United States.

Asked if developing apps for dual-core smartphones will be more challenging, Qualcomm’s Stefanac noted that it should be similar to single core phones. LG’s Fong agreed, adding that developers will not have to worry about the performance of graphic-rich applications on the Optimus 2X.

“We are certain that the 2X presents for developers the opportunity to put new software in the market where previously there simply just wasn’t enough processing grunt to run these programs credibly,” said Fong.

Specialized talent to top IT manpower demand

Skilled IT professionals such as software developers and project managers may now find it easier to secure a job, thanks to the burgeoning regional economy.

Job recruitment specialists ZDNet Asia spoke to pointed out that demand for this group of trained talent is back in the Asia-Pacific region, following a “slowdown” during the economic downturn two years ago.

In an e-mail interview, Lim Der Shing, CEO of online portal Jobscentral, said that network administrators, social media specialists and consultants who can strategize and influence business performance can look forward to more job opportunities. He also revealed that the site has seen a 10 percent increase in IT-related job postings compared to the same period last year.

Similarly, Thomas Kok, senior program director at the National University of Singapore (NUS), noted in an e-mail that professionals plying the risk and control trade, such as IT auditors, risk managers and business continuity managers, will be more sought after by employers in 2011.

Kiran Sudarraja, practice leader for technology at PeopleSearch, gave further insight into how IT jobs will evolve, following the amalgamation of technology and business in today’s corporate world.

According to him, 2011 is a “growth year”, with emphasis on roles requiring creativity and innovation. These may involve business process improvement, cost reduction and productivity gains, as well as expansion into new markets and staying ahead of the competition.

Sudarraja added that specific areas where IT skilled talent are in high demand are virtualization, cloud computing, green IT, social media, mobile payments and e-commerce. Compliance, security and support functions for financial regulations are also spheres in which trained professionals are needed.

Pay revision on the way
Jobscentral’s Lim pointed out that the strong economic rebound has given businesses a boost, with many now looking to reward their workers who took pay cuts in 2009 and 2010.

“Unemployment is low in Southeast Asia and even lower for IT professionals. As such, it will increasingly become an employee’s market for 2011. These factors will naturally lead to rising wages in the form of increments and bonuses,” he said.

PeopleSearch’s Sudarraja put the increment figures at between 4 and 5 percent, which he said is the norm.

While the market may seem transparent and fluid for now, the rise in salaries will not be even across the board, noted NUS’ Kok. He stressed that the market may still prefer those with “good, relevant experience, and those with relevant qualification and certification”.

“Talent management, especially in the IT industry, is crucial in these strong market conditions, and we will see continued upward pressure on salaries,” said Kok. “However, there will be differentiators–selected professionals may experience a larger rise in their salaries.”

This sentiment was echoed by Sudarraja. Top performers, he explained in his e-mail, are “treated well”, and their remuneration and benefits package will largely be dependent on the forecasts for the year ahead as well as the previous year’s performance.

With the greater emphasis on skilled manpower, more professionals are also signing up for upgrading courses.

The Qualified Information Security Professional (QISP) program, launched in Singapore six months ago, has seen exceptional response with enrolment “exceeding expectations”, according to Gerard Tan, president of the Association of Infocomm Security Professionals (AISP), which jointly developed the course with NUS’ Institute of Systems Science.

In a phone interview, Tan revealed that 125 students had signed up for the program to date, of whom “quite a lot” are not in the security field. The course was run thrice in the fourth quarter of 2010, with two runs scheduled for the first half of this year.

Training mindset not future-proof
Sudarraja noted, however, that in terms of training the workforce for future challenges, Asia, including Singapore, is still trapped at the “resource-driven” stage instead of the ideal “result-driven” stage. This, he added, is hindering the region’s manpower from carrying out IT tasks more effectively.

“Ideally, a result-driven organization would look at the business point of view, but Asia as a whole, and even Singapore, has yet to get there,” he commented. “While academic training can only enhance one’s knowledge, it is on-the-job training that continues to hone the skills and prepare a better workforce.”

Jobscentral’s Lim remained positive nonetheless. According to him, Singapore’s large pool of university-trained engineers and active government interest in the IT industry will be adequate to meet future challenges.

The rising number of locally-owned technology successes “shows we have the manpower and business know-how and support systems to keep up with changes in the field”, he said.

Outsourcing to slow down
Lower manpower costs have seen Asia benefiting from IT offshoring for the last two decades, but industry observers pointed out that parameters have changed, making the region less appealing as an outsourcing destination. In addition, MNCs may be less likely to offshore their functions en masse.

“[The decision to outsource] will depend on a multitude of factors, including the state of economies of Europe and the U.S. where most MNCs are headquartered, and whether there are planned expansions to focus on the growing Asia-Pacific markets,” noted NUS’ Kok.

Increased wages across India, China and Southeast Asia, as well as fluctuation between the greenback, the euro and Asia-Pacific currencies have put a slight dent on outsourcing, he added.

Similarly, PeopleSearch’s Sudarraja highlighted that companies continue to look for cost-effective locations, and jobs may eventually flow to emerging markets such as Egypt, Brazil and Africa. According to him, the Asia-Pacific region will move toward self-sufficiency and cater to the region within.

“Outsourcing will continue for now but will slow down as countries all over the world look at inflation and jobless rates, while policy makers will try to woo other economies to invest in their markets but limit their own [entities] from outsourcing,” he noted.

Jobscentral’s Lim argued however, that MNCs that did not make big investments during the recession years are now sitting on large cash positions and will look to expand this year. “This means that plans for outsourcing of IT services will resume and the traditional outsourcing centers of India, Philippines and Singapore will benefit,” he concluded.

Key open source security benefits

Discussions of the relative security benefits of an open source development model–like comparative discussions in any realm–all too often revolve around only one factor at a time. Such discussions tend to get so caught up in their own intricacies that by their ends nobody is looking at the big picture any longer, and any value such discussions might have had has already evaporated.

When trying to engage in a truly productive exchange of ideas, it is helpful to keep in mind the fact that when something is worth doing, it is usually worth doing for more than one reason. This applies to the security benefits of an open source development model, as it does to other topics of discussion. A small number of such factors behind the security benefits of open source development are examined here:

The Many Eyes Theory
Probably the most common and obvious scratching post in online discussions of open source security is the so-called “many eyes” theory of software security. The simple version is often articulated by the statement that given enough eyeballs, all bugs are shallow. The most common retort is that open source means that more malicious eyeballs can see your bugs, too.

Of course, this counterargument is predicated upon a generally false assumption, that bugs are typically found by looking at source code. The truth is that bugs are found by mistreating software and observing how it fails, by reverse engineering it, and by a user simply going about his or her business until discovering that a program has done something like delete all of the previous hour’s work.

Even if the most common counterarguments against it are mostly hot air, this theory of improved security is no true guarantee of practical security benefits. Possibly the most difficult counterargument to dismiss effectively, despite its fallacious reasoning, is the simple statement that the open source “many eyes” theory of software security does not work because it provides no guarantees. It is difficult to dismiss because it is true that no such guarantee exists. That difficulty is frustrating because many people who make such arguments, and presumably many of those who listen to them, overlook the fact that it does not have to be a guarantee to be a benefit. All it needs to be is an increased likelihood of security, or even just an increased opportunity without a counterbalancing problem.

The “Not Microsoft” Theory
Microsoft is widely recognized as a symbol of poor software security. Generations of computer users have essentially grown up experiencing the security issues that make such a reputation so well deserved. The fact that MS Windows 95, 98, and ME all failed to even do something so simple as maintain memory space separation is the kind of gross, dangerous oversight in the design of a system that can permanently tarnish a reputation. The simple fact that your software does not come from Microsoft lends it an air of at least a little legitimacy amongst some people, because while that does not prove it is secure, it at least suggests it may not share the traditional problems of MS software security.

Microsoft has launched a number of initiatives over recent years to try to rehabilitate that reputation, of course. Much of its success in this area owes more to money spent advertising a greater focus on security than to the security efforts themselves, but meaningful changes have been made in the way Microsoft produces software in an attempt to improve its technical security, in addition to the copious marketing dollars spent on apparent security. These days, for many people, it is not sufficient to merely say, “This is not software from Microsoft,” to make them think your software is secure. If you want to impress people, you have to explain how your software is secure, not merely that it does not come from a vendor well known for its past security failings.

Even so, pointing out that Microsoft was not involved in your software development process can still carry some weight with at least some readers or listeners. Microsoft is still going through some growing pains on its way toward producing more secure software, and internal conflicts between secure design and other (less technical) concerns for the commercial viability of its software offerings still present major hurdles to improving software security. Just be aware that to effectively use this argument you will probably need to be able to back it up with current, relevant explanations of the security problems that still lurk in the software development processes of this industry giant.

The Transparency Theory
Possibly the most unassailable security argument for open source software development is that of transparency. Because the source code is open, and because (especially in the case of very popular projects) many people are motivated to sift through the source code of open source software projects for a variety of reasons, that source code is likely to be seen by a great many people. Apart from the notion that bugs become “shallow” when enough eyeballs scrutinize the software, those eyeballs also provide some discouragement for those who might try to sneak malicious–or at least dubious–functionality into the design of a software system.

The most obvious and immediate counterexample is probably the OpenBSD project’s 2010 scandal over a claim that its IPsec implementation contained an FBI “backdoor”. The fact of the matter is that this claim is most likely false, whether the person making the claim knows it or not; a number of developers have set out to analyze the design of the system and find such backdoors if they exist, and come up empty-handed. Even if the claim proved true, however, it would not invalidate this theory of improved security for open source software.

The fact of the matter is that the quick announcement of the claim by the OpenBSD project founder, Theo de Raadt, illustrated the effects of open source software development as a motivator for being honest and up-front with the public about security matters. By contrast, the majority of large corporate software vendors would have been more inclined to sweep such claims under the rug and, even if they proved true, try to keep such knowledge out of the hands of users for fear it might affect sales. There is little motivation to share such issues when it might damage sales figures in cases where the closed source development process (and development employees who have signed NDAs) ensures a very low likelihood of outsiders stumbling across such vulnerabilities independently.

The Unix Theory
The Unix style of operating system (and other software) design provides substantial benefits for security over many other approaches to software design. Basic (but complete) privilege separation, modularity, and decades of testing under fire are among the many reasons Unix-like operating systems often provide greater security benefits than competing OSes.

While this argument stands up well for certain specific pieces of software or user environments, it is not universally applicable. Open source operating systems like Haiku and Plan 9 are not very Unix-like and, while they may be very well designed systems with strong security characteristics, discussing the security benefits of Unix does not address these systems’ benefits as open source software. More to the point, there are closed source Unix-like systems that offer much the same benefits. Some other open source software is also not very Unix-like, such as the Mozilla Firefox browser and the Pidgin multiprotocol IM client, both of which take a monolithic, “feature rich” approach to software design that stands in marked contrast to the Unix approach of designing programs to do one thing, do it well, and interface easily with other programs that do other things.

For those pieces of open source software that do conform to the expectations of Unix, however, this argument is alive and well, and quite valid. The extent to which tools like cat and grep have grown out of control in some implementations and drifted away from the Unix philosophy of software design is troubling to some, but the tenets of that philosophy are still visible in the basic design of these tools. Simplicity, clarity, and care in the design of software is a pleasant benefit that arises in part from such an approach to software development.
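
To make that philosophy concrete, here is a minimal sketch of the compositional style being described: counting failed SSH logins per source address by chaining small, single-purpose tools. The log path /var/log/auth.log is an assumption and varies between systems, as may the exact log format.

$ grep "Failed password" /var/log/auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn

Each tool does one small job, and the pipeline composes them into something more useful than any one of them alone.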

Breadth of knowledge
The important thing in considering such matters is to be aware that circumstances are more complex than a single, pithy statement about the security of open source software. Several arguments are relevant to discussions of the security benefits of open source development, including not only those listed above but others as well. Do not neglect all but one, and get yourself backed into the dead-end of a merely semantic argument relating to that one single security benefit of open source software development. Do not put all your eggs in that single basket when selecting software for your use, either. Seek out, and consider, other potential arguments, not only for discussions with others who might disagree with your analysis, but also because you need to know something about the major arguments to make an informed decision about what software to use and how to use it in the most secure manner.

Finally, do not make the mistake of making–or being taken in by–the Invulnerability Theory. Some have claimed that certain open source software, especially including Linux in general or Ubuntu Linux in particular, is impervious to security exploits of any kind. Such claims are patently false, and in fact quite obviously ridiculous. Linux is not the most secure operating system, and neither is anything else, regardless of development model.

Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.

How to reduce the Group Policy refresh interval

Group Policy is a great way to deploy settings to users and computers centrally–unless you wind up waiting for the updates.

The default interval to update the Group Policy to a computer account is 90 minutes, with a further offset of 0-30 minutes. While this schedule is fine for most situations, there may be times when you need to make it shorter for quick updates.

There are various ways to shorten the Group Policy refresh interval. But be careful when you make these changes because it will increase the traffic from domain controllers to computer accounts.

One approach is to have the server computer accounts receive a tighter refresh policy, with the assumption that there are fewer servers than client computers.

The refresh interval is defined in Group Policy in the Policies | Administrative Templates | System | Group Policy section, in a setting called Group Policy Refresh For Computers (Figure A). Once that setting is enabled, the value, specified in minutes, determines how frequently the computer accounts will try to update the policy.

Figure A

Another option is the offset labeled Random Time Added. The offset is important because it ensures that the domain controllers aren’t perpetually bombarded with requests for updates. Figure B shows a tightened value for the update refresh interval.

Figure B

A good approach is to tighten the update interval when a number of frequent changes need to be deployed, such as after a move or a major system update. But consider whether a tighter interval is needed, especially because in most cases the updates do not retrieve a new configuration for the computer account.
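
If only a handful of machines need to pick up a change immediately, forcing a manual refresh may be simpler than tightening the schedule for everyone. As a minimal sketch, run from a command prompt on the machine in question (distributing the command to remote machines is assumed to be handled by whatever management tooling you already use):

C:\> gpupdate /target:computer /force

The /force switch reapplies every setting rather than only the ones that have changed, so it is best used sparingly.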

On the other hand, large environments may want to make this interval much larger when thousands of computer accounts may be in use.

Rick Vanover (MCITP, MCSA, VCP, vExpert) is an IT Infrastructure Manager for a financial services organization in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

BPO to take Philippines to higher ground

Having toppled outsourcing giant India in the call center market last year, the Philippine ICT industry is aiming to level up further this year as the government and the private sector team up to set ambitious revenue goals and draft long-term programs.

According to analyst firm XMG Global, 2011 should be a positive year for the country’s IT market with overall IT spending estimated to grow 11 percent to US$3 billion.

The BPO (business process outsourcing) industry, one of the country’s main revenue earners, is again leading the charge. Industry group Business Processing Association of the Philippines (BPAP) is targeting to hit US$11 billion in revenues in 2011, a 20 percent increase from estimated US$9 billion in 2010.

The Philippines last year dethroned India as the global call center hub, hitting US$5.7 billion in revenues against India’s US$5.5 billion and employing more call center workers than the former leader.

BPAP and the Philippine Commission on ICT (CICT) are projecting that the industry could create an additional 84,000 jobs this year, bringing the total number of IT-BPO workers in the country to 610,000.

The figures tally with XMG’s forecast, which estimated that the number of people employed in offshore services will reach 651,425 in 2011. “One in 12 employed professionals in Metro Manila will be working either in BPO, call center or IT services,” said Phil Hall, principal analyst at the research firm.

BPAP’s chief, Oscar Sañez, said in an earlier statement that aggressive marketing, both locally and internationally, will be key for the Philippines to achieve the US$11 billion-revenue goal. “We have to increase the awareness of our potential employees of job opportunities in IT and BPO companies, including those in the knowledge process outsourcing and other non-voice sectors. We also have to improve our visibility internationally to market new services in new territories,” he said.

Sañez also noted that President Benigno “Noynoy” Aquino III in December 2010 had pledged to allocate 62 million pesos (US$1.4 million) as “BPO promotions fund”, adding that the amount would help the industry achieve this year’s revenue goal.

Focus to include broadband, digital TV
For the CICT, the government agency in charge of the local ICT market, boosting the BPO sector is just one part of the “digital strategy” which spans 2011 to 2016.

Ivan John Uy, who heads the agency, told ZDNet Asia in an interview that the five-year plan–which will be launched soon–aims to enhance the country’s software, telecoms, e-government and postal sectors.

This year, the CICT is coordinating with various academic and non-formal education institutions to “re-tool” jobless college graduates and youths, Uy said. “For instance, our nursing graduates who don’t have work yet can be trained to become medical transcriptionists and healthcare support specialists,” he said.

He cautioned that the country has to quickly replenish or augment its BPO manpower base. “We run the risk of [skills] shortage. We have lots of jobless people around but they don’t have the skills,” Uy said, adding that the government is also looking at the possibility of offering a “study now, pay later” scheme for the unemployed.

The CICT this year will also be preparing for the country’s migration to digital TV, he said, elaborating on plans for the local telecoms sector.

Last year, the National Telecommunications Commission (NTC)–which operates under the CICT–selected a Japanese digital TV standard which broadcast companies must adopt by 2015.

Uy explained: “As part of our preparations, I’ve directed the NTC to organize a technical working group to draft, within the year, implementation rules and regulations (IRR) which would also serve as a guideline for broadcast companies.”

Touching on the private telecoms sector, XMG said increased competition will force mobile operators to roll out better pricing, especially with regard to data plans and long-distance charges.

“[Leading] service providers will be those that can leverage their wireless and extended bandwidth capabilities,” noted Hall, who is based in Manila. “Price, services and local content provisioning will be the dominant lure and battleground, as social networking continues to grow dramatically and cuts into the SMS market.”

In the broadband space, the XMG analyst said subscriber base growth will be propelled by intense competition which will push down the prices of entry-level packages.

“Competition is increasing between fixed-line, cable and mobile providers,” he said. “Among telco giants PLDT Smart and Globe Broadband, subscribers will continually lag behind cellular subscription. However, broadband will continue to remain [these operators’] growth area [and see] double-digit growth, making it an important revenue stream for all carriers.”

Given the growing number of Filipinos clamoring for better quality of service and pricing, both at home and on the move, consumers are unlikely to stick to a single provider when buying broadband services, he noted.

“[To lead the market], service providers will need to develop loyalty programs and provide attractive pricing and bundling schemes,” Hall said. “Internet TV will not [gain] traction in the Philippines yet, but we foresee tie-ups between TV content providers and Internet for on-demand replay of shows. Watch for PLDT Group’s TV5 as they strategically evolve to become the natural fit to take on this leadership role.”

An interoperable government
Turning to e-government initiatives, Uy said the CICT will be pushing for interconnectivity and interoperability between IT systems deployed across different government agencies.

“Each agency has its own GIS (government information system) and data center which do not talk to each other. ICT adoption in the government is extremely low and fragmented,” he revealed.

XMG, though, is not expecting any major leaps in this area this year. Hall said: “However, if the new Aquino government follows its stated plan, we anticipate a slow progression from more use of IT in government departments to true e-government applications during this presidential term.”

With regard to the country’s postal service, which falls under the domain of the CICT, Uy said reforms are underway to transform post offices across the Philippines into self-sustaining community e-centers.

He noted that the Philippine postal service incurred losses totaling 300 million pesos (US$6.8 million) last year. “We need to fix this and install a new business model.”

Beyond the government sector, XMG said the Philippines can expect to see IT developments in other areas including social networking, consumer electronics, green IT, cloud computing and software development.

According to Hall, social networking activities in the country will see continued growth through 2011 and beyond. “Facebook and Twitter are taking market [share] from SMS,” he said. “Like e-mail and the mobile phone before, these are culture-changing products and we have not seen their full potential yet.”

“Expect more developments for use of social networking in business, but also expect higher levels of advertising, spam or its equivalent, viruses and other intrusions,” he added.

XMG also expects tablets to claim ascendancy in the gadget race.

“Most major manufacturers are due to release their first models in first-quarter 2011, while Apple is due to announce the iPad 2,” Hall said. “With the rise of the middle class and tech-savvy Gen X and Gen Y Filipinos, expect to see these gadgets in local coffee shops. With a wide range of devices and operating systems, there will be no leader, but expect Apple to remain strong, followed by Samsung and RIM.”

Elaborating on cloud adoption, the XMG analyst said IT vendors are expected to grow their enterprise offerings through the public cloud. “However, we do not anticipate well-established companies with significant investment in IT to [migrate] their ERP systems or legacy applications just yet in 2011,” Hall noted.

He said enterprises will need new software built to be deployed on the cloud, as legacy systems are not designed for such implementation.

He also pointed to green IT as a growth area for the Philippines as high utility costs in the country make a good case for the deployment of energy-efficient hardware and virtualized servers.

“The adoption of green IT practices will increase, albeit slowly, over the next 12 months primarily due to newer hardware refreshes,” Hall said. “Unlike other green-conscious economies such as Singapore and Korea, businesses and industries in the Philippines must still collectively make a commitment to saving the environment and reducing carbon emission footprint generated by technology.”

Melvin G. Calimag is a freelance IT writer based in the Philippines.

India 2011 basks in ‘solidification’ of 2010

India will see major IT trends from 2010, such as green IT and cloud computing, continue to gather momentum amid optimism that 2011 will bring new innovation and growth, industry players say.

“We are optimistic about 2011,” Sudip Nandy, CEO of communications technology vendor Aricent, told ZDNet Asia in an e-mail. As the macroeconomic environment further improves, Nandy expressed hopes to see significant spending on innovation and new applications of technology.

Ananadan Jayaraman, chief product and marketing officer at Connectiva Systems, concurred. “It will be a year of rapid growth for the business with significant activity in emerging markets, particularly, India, Southeast Asia, Eastern Europe and Latin America.”

“We believe the U.S. market will continue to be soft and will take longer to return to robust growth,” Jayaraman added. He noted that Connectiva, a revenue management software vendor, expects customers to be increasingly demanding and to expect vendors to take full responsibility for business outcomes and work with them on risk-reward models.

Surajit Sen, director of channels, marketing and alliances for NetApp India, said: “We’ll see the same economic conditions and the same major IT themes in 2011. It will be a year of solidification and increased adoption of some key trends that began in 2009 to 2010.”

For instance, Sen noted, most, if not all, companies would have adopted a “virtualize first” policy for new applications.

Green IT is also likely to gather momentum, with businesses in India continuing to adopt energy-efficient technologies to reduce costs and provide various environmental benefits.

“This trend will grow further in 2011, alongside increased use of business efficiency solutions and asset and infrastructure consolidation,” said Vipin Tuteja, executive director of marketing and international business, Xerox India. He also expects businesses to develop more collaborative work environments which seek to optimize the use of cloud.

Sen added: “There will be even more talk about cloud IT services, though buyers are still cautious. There will be a lot of talk about hybrid clouds.”

He noted that over the last couple of years, Indian IT companies also have begun to explore opportunities in markets such as Mexico, Ireland, Netherlands, Philippines and Brazil. This trend will continue in 2011 as companies continue to diversify their business from core markets such as the United States and United Kingdom.

According to Dun & Bradstreet (D&B), service providers are expected to sharpen their focus on India’s domestic market to tap imminent growth opportunities offered by the country’s booming economy.

“The rapid growth in the domestic market is likely to be driven by major government initiatives such as increased spending on e-government and increased thrust on technology adoption, and upgrades across various government departments to bridge the digital divide,” the D&B statement said.

The business research firm noted that the Indian IT-BPO (business process outsourcing) industry is expected to adopt the inorganic growth route in order to widen its service offerings and enter new geographical markets.

It added that several third-party and captive BPO units are likely to acquire small companies to ramp up revenue, acquire clients and expand business segments and geographical reach. “Consolidation will also be driven by international M&A (merger and acquisition) deals, propelled by robustness of the Indian players,” it said.

Growth driven by 3G, BWA
According to research firm IDC, the launch of 3G and BWA services is expected to boost the demand for more gadgets across India.

Sumanta Mukherjee, lead PC analyst at IDC India, said the PC market this year will be redefined by the introduction of 3G services and service bundling with existing and new PC form factors, increased functionality in mini-notebook PCs, and wider adoption of IT in the education sector.

Aricent’s Nandy said: “I am very upbeat about communications since operators are rapidly adopting technologies like LTE, and devices and application vendors are constantly competing to deliver compelling user experiences to the consumers.”

Jayaraman also pointed to telecommunications, media and entertainment to provide continued growth, driven by innovation in mobile, tablets and on-demand video. “Utilities is another segment where we expect to see increased IT investments driven by smart grids,” he said. “We also expect banking and insurance sectors to come back very strongly this year.”

End of tax holiday may hit firms
However, the uncertainty over whether the tax holiday will be extended after March 2011 could slow down the expansion plans of several Indian IT companies.

D&B said: “Large companies would be able to alleviate the tax burden arising from the expiry of the tax holiday by moving into SEZs (special economic zones). However, small companies, which form the bulk of the companies registered with STPI (Software Technology Parks of India), will find it hard to survive as they are still struggling post-global recession and do not have the financial resources to face this challenge.”

Swati Prasad is a freelance IT writer based in India.

Malaysia looks to higher ICT spend

Malaysia’s ICT spending is expected to rise this year and growth will be driven by several emerging technological trends, say industry watchers.

In its annual ICT predictions, research firm IDC noted that IT spending in the country will grow by 9 percent from US$5.9 billion in 2010 to US$6.5 billion this year. Spending in the telecommunication sector is expected to hit US$7.3 billion in 2011, up 5.3 percent from 2010.

Roger Ling, research manager for IDC Asean, said more changes are expected in the local ICT market that will drive growth this year. Total IT spending for Malaysia, driven mainly by purchases of hardware and packaged software, grew 6 percent in 2010, he added.

Ling said: “The growth in IT spending in 2011 is expected to be driven by factors such as the government’s continued efforts to increase the level of broadband penetration, and outsourcing initiatives by organizations looking to address the increased IT complexity.” Other factors include the continued adoption of system infrastructure software to operate and manage computing resources, he added.

In its annual prediction, Frost & Sullivan pointed to wireless broadband and cloud computing as two growth areas in the local ICT sector.

“The wireless broadband subscriber base overtook its fixed counterpart in 2010 and we expect this trend to accentuate, leading to, among other things, increased demand for smartphones and more competition among wireless players,” said Nitin Bhat, Asia-Pacific partner and vice president for ICT Practice, Frost & Sullivan.

In an e-mail interview, he noted that cloud will gain significant traction this year, driven by the “twin factors of supply-side maturity and demand-side understanding”. “We see a high propensity of trials, and some transactional-based cloud computing adoption among enterprises in Malaysia,” Bhat said.

Talking cloud
According to Ananth Lazarus, managing director of Microsoft Malaysia, IT investments in both the private and public sector will shift toward the cloud this year, driven by two key factors.

The first, he said, is business needs. Second, Lazarus said the government’s transformation programs will see key projects taking off this year.

“The promises of the cloud are applicable [to these programs]. Reducing costs, providing flexibility and agility in how organizations use their IT resources, ease of adoption and implementation, and not least, allowing organizations to explore and develop innovative services with a low cost of entry is what the cloud can do,” he said in an e-mail interview.

Customers will also start to explore the tradeoffs between private and public cloud offerings. Large enterprises that have been testing the waters will begin more earnest deployments and will aggressively look at building their own private clouds, he noted.

Early adopters will serve as proof-points, and best practices will encourage cloud adoption among small and midsize businesses, he said, adding that government agencies would initiate discussions on key issues such as data sovereignty and public policy.

Johnson Khoo, managing director of Hitachi Data Systems (HDS), noted that businesses this year, in particular, will start looking at new investments in IT infrastructure and services, such as data centers, while continuing to focus on keeping costs low and maximizing their existing IT investments.

In an e-mail interview, Khoo noted that with the announcement under the government’s Economic Transformation Program (ETP), Malaysia is seeking to be a world-class hub for data centers in the region. The ETP is designed to boost the country into a high-income nation by doubling its per capita income to US$15,000 by 2020. The bulk of the program involves infrastructure-driven projects such as the Mass Rapid Transit system due to kick off in July.

He said HDS expects to see growing interest in data center infrastructure and related services such as co-location and Web-hosting, managed networks, disaster recovery and other outsourcing services.

Skilled workers needed
Khoo, however, cautioned that Malaysia still lacked a skilled and knowledgeable workforce to complement these infrastructure investments. He noted that the country faces a shortage in human capital with skills that are particularly crucial for the ICT industry.

“Malaysia is globally recognized as a profitable regional hub for shared-services activities,” he said. “It is vital that both the government and [industry] intensify efforts to address this to remain competitive against our neighbors in the region.”

Yuri Wahab, country general manager for Dell, concurred. “More initiatives such as the newly established Talent Corporation aimed at attracting human capital, including Malaysians working overseas, are vital to ensure the nation’s talent pool grows and that our knowledge workers contribute positively toward the development of the country,” Wahab said.

He also expressed enthusiasm for Malaysia’s ICT industry, pointing to the rollout of the country’s high-speed broadband initiative. He added that it will promote greater digital inclusion, which is a key contributor to economic growth.

“This would allow more Malaysians and local entrepreneurs to connect to and participate in an increasingly global and borderless economy… We believe that this will also drive ICT consumption in the country,” he said.

Edwin Yapp is a freelance IT writer based in Malaysia.

Tweak your Ubuntu with Ubuntu Tweak

Do you remember those days when every Windows user worth their salt installed TweakUI, in order to get as much tweaking and configuring as they could out of their PC? That tool really did a lot for the Windows OS and, believe it or not, there is a similar tool for Ubuntu. That tool? Ubuntu Tweak.

Ubuntu Tweak allows you to dig into configurations you may not have even known about…and do so with ease. That’s right, there’s very little “magic” or obfuscation involved with this tool…it’s just straight-up configuration options that might have otherwise been hidden (or at least not as easy to find). With Ubuntu Tweak you can:

  • Update your system.
  • Add sources for packages.
  • Change startup settings.
  • Configure numerous hidden desktop settings (including desktop backup and recovery).
  • Set up default folder locations.
  • Manage scripts and shortcuts.
  • Gather system information.
  • Manage file types and Nautilus settings.
  • Configure power manager settings.
  • Manage security settings.

So, how does it work? How is it installed? Let’s take a look.

Installation
You won’t find Ubuntu Tweak in the Ubuntu Software Center. Instead you need to download the .deb package and install it manually (or let your browser open up the USC for the installation). I prefer the manual method, so that is what I will demonstrate.

Download the most recent .deb package from the Ubuntu Tweak main page. Once you have that file downloaded, follow these steps:

  1. Open up a terminal window.
  2. Change into the directory holding the newly downloaded .deb file.
  3. Issue the command sudo dpkg -i ubuntu-tweak-XXX.deb, where XXX is the release number (the full terminal sequence is sketched after these steps).
  4. Type your sudo password and hit Enter.
  5. Allow the package to install and then, when it is finished, close the terminal window.
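
Put together, the terminal session looks roughly like the sketch below. The filename is hypothetical (the XXX release number above stands in for whatever version you actually downloaded), and the final command is only needed if dpkg complains about missing dependencies.

cd ~/Downloads                        # or wherever the downloaded .deb landed
sudo dpkg -i ubuntu-tweak-XXX.deb     # install the downloaded package
sudo apt-get install -f               # resolve any missing dependencies reported by dpkg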

Usage
To start up Ubuntu Tweak, click Applications | System Tools | Ubuntu Tweak. When you first start the tool, it will warn you that you should enable the Ubuntu Tweak stable repository. Click OK to do this. Once that warning is out of the way, you can dig into the tweaking of your Ubuntu OS.

Figure A

The interface for Ubuntu Tweak is very well done (see Figure A). As you can see, the left pane is broken into categories: Applications, Startup, Desktop, Personal, and System. Some of these tweaks will require the use of sudo and some will not (depending on the nature of the configuration).

One very handy configuration in the Personal section is Templates. Here you can drag and drop files into the main window and those files will then be added as document templates.

From an admin standpoint, a very handy option is the Login Settings in the Startup section. In this section you can configure:

  • Disabling the user list in GDM.
  • Playing a sound at login.
  • Hiding the restart button.
  • The login theme.

Obviously not every option is a gem, but the ability to hide the user list and disable the restart button on the login screen can be very handy.

Finally you will want to take a look at File Type Manager in the System section. This allows you to manage all registered file types on your system.

I have only scratched the surface of Ubuntu Tweak–it really is an incredibly powerful and handy tool that any and all Ubuntu users and administrators should get to know. From this single window you have the ability to configure and administer many items from the System menu.

The art of the small test program

It’s happened again. No matter how carefully you’ve tested each capability of the language, the library, or the framework you use. No matter how religiously you’ve built unit tests for each component. When you finally bring it all together into an application masterpiece, you get a failure you don’t understand.

You try every debugging technique you know; you rewrite and simplify the most suspect passages; you stub out or eliminate entire components. Perhaps this helps you narrow the failure down to a particular region, but you still have no idea what’s going wrong or why. If you have the sources to the language or the library, you may get a lot further than if they’re proprietary, but perhaps you still lack the knowledge or the documentation to be able to make enough sense of the failure to solve the problem.

It’s time to get some help. You post questions on the fora, or you contact the author/vendor directly, but they can’t reproduce the problem. (Somehow, you knew that would happen.) They want you to send them a reproducing case. You direct them to your entire application, and the problem never gets resolved, because it’s just too much trouble. The end.

Okay, we don’t like that ending. How can we rewrite it? In the case of paid support we can stomp, yell, and escalate to force the vendor to spend time on the problem; but if it turns out to be too difficult to get the entire app running and debuggable, then they can still plead “unreproducible”. There is only so much that a vendor can do. Even if they stay on the problem, it could take a very long time to get to the bottom of it. Fortunately, there’s something we can do to help the vendor help us: It’s called the Small Test Program (STP).

“Whoa! Wait a minute! We already removed everything extraneous when we were attempting to debug this!” I hear you cry.

That may be true, but our goal then was to rule out other causes. You can almost always do more by shifting the goal to reducing the footprint of the test case. The two goals sound almost the same, and they overlap a lot, but they don’t cover entirely the same ground. In the first case, we were trying to do everything we could to help ourselves solve the problem. In the second, we want to do everything we can to help the developer solve the problem. That means we need to take the following steps:

  • Remove reliance on a specific configuration. No doubt you’ve customized your development environment with all sorts of shortcuts and conventions to save yourself time; every one of those costs time, though, for someone who isn’t familiar with them. You either need to remove those dependencies and create a more vanilla example, or provide an instant setup for them that won’t be invasive. For instance, if you need the user to set certain environment variables, provide a script that does that and then launches the app. Preferably, eliminate the dependency on environment variables altogether — dependencies can add to the confusion by being set in more than one place, or not getting exported properly.
  • Eliminate all custom or third-party components that you can. You should have already done this, but it becomes even more important when submitting a failure. External components attract the finger of blame — as they should, because they often cause unforeseen problems. Rule them out. Furthermore, if the external components require installation and setup, that delays the developer from being able to look at the problem. Developers often have trouble getting these components to work on their system, which is all wasted time if they didn’t really need them to begin with.
  • Reduce the number of user steps required. If you think that one or two runs through the test case will reveal the problem, then your name must be Pollyanna. If they have to run your test a thousand times, every minute of elapsed execution time costs two work days. It’s actually more than that because people are human–every time the developers have to restart a long, arduous set of steps, they need a pause to sigh and wonder where they went wrong in life.
  • Clearly document the steps required. I don’t know how many times I’ve received something claiming to be the steps to reproduce a problem that reads “Run the app.” Unless the app is so simple that it requires no setup or interaction, and the failure is so obvious that not even [insert archetypal clueless person here] could miss it, this instruction will fail to reproduce. No matter how apparent it may seem, include every step–every setup command, the command to launch the app, and every input required. If you followed the previous steps, this shouldn’t be much.
  • Reduce the number of lines of code executed as much as possible. Maybe the entire program runs in two seconds, but if it executes 30,000 lines of code, then that’s at least 30,000 possible causes that the developer may have to rule out. Furthermore, it complicates debugging. If you can get the entire program down to “step, step, kaboom!” then you’re gold.
  • Include clear indications of failure. Don’t presume that the developer will recognize immediately that your Weenie Widget is 10 pixels too short — tell them so in the steps. Ideally, the application should scream out “Here’s where I’m failing!” when it’s run. Use assertions, or at least a printf or message box (see the sketch after this list).
  • Include clear indications of success. How many times have I solved a problem presented by a test program, only to run into another failure immediately afterward? Did I fix a problem that they weren’t reporting, and now I’m seeing the one they meant? Usually, they know about the second one, but they just didn’t bother to prevent it since they had reproduced a failure with the first one. This is bad form. Ideally, you want your test program to be tailor-made for inclusion in a test suite so the same problem doesn’t get reintroduced. For that to happen, it needs to cross the finish line with flying colors. Let there be no doubt that it was successful.
  • Test your test. Run through the test as if you were the developer assigned to work on it to make sure you didn’t forget anything. Don’t run it on your development system, because your environment might be set up in a way that the developer’s isn’t. Use a virtual machine with a vanilla configuration to run the test and make sure it fails in exactly the way you intended. It could save you a few email round trips and avoid giving the impression that you don’t know what you’re doing.
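
To make several of these points concrete, here is a minimal sketch of a one-command reproduction script. Everything in it is illustrative rather than prescriptive: the ./stp binary, the WIDGET_MODE variable, and the expected.txt file are hypothetical stand-ins for whatever your own case requires.

#!/bin/sh
# run_stp.sh -- sets up the environment, runs the small test program,
# and states success or failure in plain language.

WIDGET_MODE=plain                 # hypothetical setting the failure depends on
export WIDGET_MODE

./stp > actual.txt 2>&1           # hypothetical small test program under test

if cmp -s actual.txt expected.txt; then
    echo "SUCCESS: output matches expected.txt"
else
    echo "FAILURE: output differs from expected.txt -- this is the bug"
    diff expected.txt actual.txt
    exit 1
fi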

Why you should create an STP
Why should you put the extra effort into creating an STP? It’s their bug, after all. Let them find it and fix it.

Most of my clients are software developers, so I’ve looked at this issue from both sides. I’ve been the recipient of hundreds (perhaps thousands) of failures to solve over the last 20 years, and I’ve had to submit my share of them to numerous software providers. I can tell you from my experiences that more than anything else–more than whether you pay the vendor to support the product or how much, more than all the screaming and yelling you can muster, more than all the flattery you can lay on them, more than any reputation they may have for responding in a timely manner–the single most influential factor in determining how quickly the developers will resolve your problem is how clearly and concisely you’ve demonstrated the failure.

So, the next time you need to submit a problem report, remember the immortal words of Steve Martin: “Let’s get small.”

Chip Camden has been programming since 1978, and he’s still not done. An independent consultant since 1991, Chip specializes in software development tools, languages, and migration to new technology. Besides writing for TechRepublic’s IT Consultant blog, he also contributes to [Geeks Are Sexy] Technology News and his two personal blogs, Chip’s Quips and Chip’s Tips for Developers.

Data Protection Manager 2010 migration successes and challenges

In a September 2010 TechRepublic article, I discussed Westminster College’s migration from Symantec’s Backup Exec to Microsoft’s Data Protection Manager (DPM) 2010 and outlined our reasons for making the switch.

We were facing four challenges:

  • Backup Exec licensing. We had been using Backup Exec for quite some time and needed to deploy additional servers and services and be able to protect some new workloads, including Exchange 2010 and SharePoint 2010 data. We were out of licenses to protect these workloads and would have needed to upgrade the existing software as well.
  • Challenged backup window. Our backup window was starting to get a bit tight.
  • Lack of continuous protection. We were using a very traditional backup operation that relied on full backups on weekends and differential backups once per day throughout the week. This left significant opportunity for data loss in between backups.
  • Recovery time. When recovery operations needed to be performed, they could be monotonous, time-consuming tasks because we were still fully reliant on tape as our primary backup storage vehicle.

Since September 2010, we have made significant progress in migrating our backup operations from Backup Exec to DPM 2010, although we still have a few workloads that reside on Backup Exec. Here’s an update on our migration progress, in which I share some successes we’ve had, challenges we’ve identified, and new opportunities that have arisen to improve our backup and recovery capability.

Successes
All of our critical workloads are being well protected under DPM 2010, including all of our enterprise, mission-critical database applications, Exchange 2007 and 2010, SharePoint 2010, and our file services.

I’m incredibly impressed by DPM, but I would probably feel the same way about just about any disk-based backup and recovery tool due simply to the sheer speed of recovery. Several weeks ago, we had a need to restore a backup from the previous evening of our ERP database, but we needed to restore it with a different name so that it could be modified by our ERP vendor for an implementation project that we have underway. Previously, this kind of activity would have taken an hour or two; however, we decided to give it a go with DPM.

Between the time it took to stage the recovery and actually restore that database to a new name and location, we had invested a grand total of less than 10 minutes–for a 28 GB database.

My staff and I also learned that, although DPM doesn’t come right out and say that you can rename a database during a restore, you can easily do so by telling DPM to restore the database to an alternate SQL instance and then simply choosing the original instance, providing the new database name, and telling DPM where in the file system the database files should be restored.

Our ERP vendor was pretty surprised when we emailed them less than 15 minutes after receiving their initial request for this “play” database letting them know that their request had been completed. In the long term, this kind of turnaround time is good for us, too. Recovery time is surprisingly fast with DPM. Of course, we’re recovering from disk over a 1 Gb Ethernet network in this example, so it should be faster than our previous tape-based recovery operations.

We’re protecting mission-critical workloads much more often than we’ve ever been able to in the past. For example, we have our database applications updating the DPM replica every 15 minutes to one hour, depending on workload.

Challenges
The primary challenge that we still face is protection of our SharePoint 2007-based workloads; this is the last item still being protected by Backup Exec. The only limiting factor has been troubleshooting time, which we will get over the next couple of weeks. In the meantime, we’ve redirected Backup Exec-based protection to a disk-based virtual tape library.  From there, we protect the Backup Exec data with DPM so that we’re continuing to provide maximum protection to all data.

Another challenge is that we, unfortunately, have some Windows 2000-based services still in production that we had to find ways to protect.  We’ve been able to work around DPM’s inability to directly protect Windows 2000 machines by scheduling local backups and then simply handling those backups as file objects on other servers. We’re working hard to get away from these Windows 2000 services.

More about our future plans
We house our backup systems outside our data center in another campus location that is, for all intents and purposes, underground. The location is not ideal from an accessibility standpoint, so we’ve been exploring other options. We could host backups completely off campus–and we will be doing so at some point–but as our primary backup mechanism, I don’t believe in hosting the service anywhere near the data center.

As the college has been working on new construction, we’ve worked with our developer to create what I believe is a perfect solution for the backup hosting challenge. In one of our new buildings (it’s about as far away from the data center as you can get and still stay on the campus network) the developers will be constructing in the basement a concrete bunker with 12-inch thick concrete walls and a concrete ceiling.  They will also be installing a 3 hour rated fire door and standalone cooling for us.  This room will be situated in the building so that it is as far underground as possible. In fact, on the other side of the outside wall will be nothing but earth.

Summary
The more I use DPM, the more satisfied I am with the product and the decision to move to it.  It has proven to be very fast, easy to manage, and robust. Overall, it has been a great addition to our backup and recovery arsenal.

Change a slide’s orientation in PowerPoint

Microsoft PowerPoint


Change a slide’s orientation in PowerPoint

You know that you can use portrait or landscape orientation in Word and Excel documents. What you might not know is that you can apply the same orientation setting to PowerPoint slides. Similar to pages in a document or report, you can change the orientation setting from slide to slide.

By default, slides are landscape. Choosing to change that default should be part of your design process. Switching from landscape to portrait, after the fact, will seldom produce results you’ll want to use.

To set a slide’s orientation, do the following:

In PowerPoint 2003:
  1. From the File menu, choose Page Setup.
  2. In the resulting Page Setup dialog, check Portrait or Landscape in the Slides section or the Notes, Handouts & Outline section.

In PowerPoint 2007/2010:
  1. Click the Design tab.
  2. Click the Slide Orientation dropdown in the Page Setup group and choose an option.

This tip won’t wow them at the newsgroups; it falls into the “I didn’t know you could do that” category. If you change a slide’s orientation, be sure to test it in as many environments as possible–it might look good on your development system but look squashed on another, especially a laptop.

Microsoft Word


How to remove the spacing between paragraphs

Word adds space between paragraphs–whether you want it to or not. If you display paragraph marks, you’ll not find any extra paragraph marks. This behavior is part of Word’s styling. When you press Enter to create a new paragraph, Word increases the line spacing to mark the change from one paragraph to another.

You can’t change the spacing between paragraphs using Backspace–the key you might press first, just from habit. Doing so will just create one big paragraph. Fortunately, you can change the spacing and Word is flexible enough to allow you to change the spacing for one paragraph, several paragraphs, or all paragraphs.

To change the spacing between just two paragraphs, choose the paragraph below the space you want to remove and press [Ctrl]+0. If the first press adds a bit more space, press [Ctrl]+0 a second time to remove the extra space.

You can remove the spacing between all paragraphs, as follows:

  1. Click Home | Paragraph dialog launcher (the small arrow in the lower right corner). In Word 2003, select Paragraph from the Format menu and click the Indents and Spacing tab.
  2. Check the Don’t Add Space Between Paragraphs Of The Same Style option.
  3. Click OK.

The change will be apparent in any new content, but it will not affect existing content. To remove the space between existing paragraphs, you must select the text first. In addition, if you copy several paragraphs that contain spacing, that spacing will remain intact.

When this option is enabled, you can’t use the Spacing option in the Paragraph group on the Page Layout tab. You must select the paragraphs and uncheck the Don’t Add Space… option first.

One last thing–this property affects only the current document. If you want to set this as a default property, click the Set As Default button in the Paragraph dialog box.

Microsoft Excel


Use conditional formatting to format even and odd rows

Many users like to shade every other row to make sheets more readable, especially when there’s lots of data. Sometimes restrictions can complicate things, or at least you might think so initially. For instance, you might think that shading only odd or even rows is a harder task than shading every other row, but you’d be wrong. Using conditional formatting, formatting only odd or even rows is simple:

  • To format even rows only, use the conditional formula =EVEN(ROW())=ROW().
  • To format odd rows only, use the conditional formula =ODD(ROW())=ROW().

Now, let’s work through a quick example:

  1. Select the rows you want to format. To select the entire sheet, click the Sheet Selector (the gray cell that intersects the row and column headers).
  2. Click the Home tab.
  3. Click the Conditional Formatting dropdown in the Styles group and choose New Rule.
  4. From the Select A Rule Type list, choose Use A Formula To Determine Which Cells To Format.
  5. In the Format Values Where This Formula Is True field, enter =EVEN(ROW())=ROW().
  6. Click Format.
  7. Specify any of the available formats. For instance, to shade all even rows red, click the Fill tab, click Red, and click OK twice.

Notice that the even rows are now red. To shade odd rows, repeat the above steps. In step 5, enter the formula =ODD(ROW())=ROW(). In step 7, choose a contrasting color, such as green. This technique isn’t just for shading; it’s for formatting in general.

Okay, that’s hideous, but it makes the point well–with little effort, you can format all even or odd rows. Please don’t ever do this to a real sheet unless you’re pranking someone!

7 overlooked network security threats for 2011

No one working in network security can complain that the issue has been ignored by the press. Between Stuxnet, WikiLeaks server attacks and counterattacks, and the steady march of security updates from Microsoft and Adobe, the topic is being discussed everywhere.

IT workers who have discovered that consolidation, off-shoring, and cloud computing have reduced job opportunities may be tempted to take heart in comments such as the claim by Tom Silver, senior vice president for Dice.com, that “there is not a single job position within security that is not in demand today”. This and similar pronouncements paint a rosy picture of bottomless security staff funding, pleasant games of network attack chess, and a bevy of state-of-the-art security gadgets to address threats. Maybe.

In these challenging times, separating hype from visionary insight may be a tall order. Yet it’s important to strike a sensible balance, because there are problems both with underestimating the threat and with overhyping the value of solutions. This became readily apparent when making a list of overlooked threats for the upcoming year. The task of sorting through the hype must not become a cause that only managers are inspired to take up.

Table A summarizes a modest list of security threats that are likely to be overlooked in the coming year. The list thus adds to the mélange of worry-mongering, but at least the scenarios are plainly labeled as worst case scenarios.

Threat Area / Worst Case Scenario
1. Insider Threat: Enterprise data including backups destroyed, valuable secrets lost, and users locked out of systems for days or even weeks.
2. Tool Bloat Backlash: Decision-makers become fed up with endless requests for security products and put a freeze on any further security tools.
3. Mobile Device Security: A key user’s phone containing a password management application is lost. The application itself is not password-protected.
4. Low Tech Threats: A sandbox containing a company’s plan for its next generation of cell phone chips is inadvertently exposed to the public Internet.
5. Risk Management: A firm dedicates considerable resources to successfully defend its brochure-like, e-commerce-less web site from attack, but allows malware to creep into the software of its medical device product.
6. SLA Litigation: Although the network administrator expressed reservations, a major customer was promised an unattainable service level for streaming content. The customer has defected to the competition and filed a lawsuit.
7. Treacheries of Scale: A firm moves from a decentralized server model to a private cloud. When the cloud’s server farm goes offline, all users are affected instead of users in a single region.

Table A. Worst Case Scenarios for Overlooked Network Security Threats

1. Insider threat
Millions of dollars can be spent on perimeter defenses, but a single employee or contractor with sufficient motivation can easily defeat those defenses. With sufficient guile, such an employee could cover his tracks for months or years. Firms such as Symantec Vontu have taken a further step and characterized the insider threat issue as “Data Loss Prevention” (DLP). Also in this category are attacks on intellectual property, which tend to be overlooked in favor of more publicized losses.

2. Tool bloat backlash
Recent TSA changes to airport security demonstrate that the public’s appetite for security measures has limits. The same is true for network security. As demands mount for more and more tools that take an increasingly large percentage of the IT budget, backlash is inevitable. Many tools contribute to a flood of false positives and may never repel an actual attack. There is a network security equivalent of being overinsured.

3. Mobile device security
There’s lots of talk about mobile device security, but despite prominent breaches employing wireless vectors, many enterprises haven’t taken necessary precautions.

4. Low-tech threats
Addressing exotic threats is glamorous and challenging. Meeting ordinary, well-understood threats, no matter how widespread, is less interesting and is thus more likely to be overlooked. Sandboxes, “test subnets,” and “test databases” all receive second-class attention where security is concerned. Files synchronized to mobile devices or copied to USB sticks, theft of stored credentials, and simple bonehead user behaviors (“Don’t click on that!”) all fit comfortably into this category. Network administrators are unlikely to address low-tech threats because more challenging tasks compete for their attention.

5. Risk management
Put backup and disaster recovery in this category, but for many organizations, servers with only one NIC, aging and unmonitored switches, and exposed cable routing are equally good examples. Sadly, most organizations are not prepared to align risks with other business initiatives. To see where your organization stands in this area, consider techniques such as Forrester’s Lean Business Technology maturity for Business Process Management governance matrix.

6. SLA Litigation
Competitive pressures will lead some firms to promise service levels that may not be attainable, even as the public’s expectations for service levels continue to rise.

7. Treacheries of scale
There will be a network management version of the Qantas QF32 near-disaster. Consequences of failure, especially unanticipated failure, increase as network automation becomes more centralized. Failure points and cascading dependencies are easily overlooked. For instance, do your network management tools identify single points of failure (SPOFs)? A corollary is that economies of scale (read: network scalability) lead directly to high-efficiency threats – that is, risks of infrequent but much larger scale outages.

What’s a network administrator to do? Address the issues over which some control can be exerted, and be vigilant about the rest. Too much alarm-sounding is likely to weaken credibility.

PowerShell script for getting Active Directory information

For a work project, I needed to compare actual Active Directory information to what was present in our ERP system, as well as match that with information about each user’s Exchange 2003 mailbox.

I wrote a “down and dirty” PowerShell script to extract a number of fields from Active Directory and write the extracted information into a CSV file. My overall plan was to compare the three data sets–the Active Directory information, the Exchange mailbox information, and the ERP information–using Excel, while making sure there was information in all three data sets that would link the data sets to each other.

Here is more information about the project, followed by the PowerShell script I wrote.

Project details
Our reasons for this project:

  • The organization has 16,000 Exchange mailboxes, and we wanted to ensure that only users who should have mailboxes do.
  • We also wanted to ensure that Active Directory accounts for departed employees are inactive and are marked for removal.

These were the project challenges:

  • In a separate report, I had to use WMI to gather Exchange mailbox information since Exchange 2003 doesn’t include PowerShell.
  • The organization has more than 600,000 user accounts in Active Directory, most of which are valid; only about 20,000 of these accounts are employees, while the rest are customers. However, in some cases, the customers were also temporary employees, so there was a need to search the entire Active Directory database for potential employee accounts.

A look at the PowerShell script
Notes:
This PowerShell script was intended for one-time use, and that creates a very different development environment, at least for me. I was going for immediate functionality rather than elegance (I am not a programmer), which is why I consider this a “down and dirty” PowerShell script.

I’ll take a line-by-line (or, in some cases, a section-by-section) look at what this PowerShell script does and explain my thinking.

# Start of script

I needed to clear the screen before script execution to make sure there was no clutter that would confuse me when I looked at display results.

Cls

I added a processing loop to break down the Active Directory information into usable chunks. Prior to adding this loop, my script crashed because the machine on which I was running it ran out of memory trying to handle more than 600,000 records at once. Each item in the “targetou” section is an Active Directory organizational unit. Immediately below, you will see a line that outputs to the screen which OU is currently being processed. By displaying information at run-time, I know exactly where I am in the process.

foreach ($targetou in 'A','B','C','D','E','F','G','GUESTACCOUNTS','H','I','J','K','L','CONTRACTOR',
'M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
{
"Processing information for OU $targetou"

The $targetou variable above is the lowest point in the Active Directory hierarchy at which I worked. The $domainrootpath variable builds the full LDAP string to the OU against which the script was to run for each iteration.

$DomainRootPath='LDAP://OU='+$targetou+',OU=ORGUSER,DC=contoso,DC=com'

The next several lines create and populate an Active Directory searcher object in PowerShell.

$adsearch = New-Object DirectoryServices.DirectorySearcher([adsi]$DomainRootPath)

I limited the kinds of objects that would be returned. The line below limits results to user objects.

$adsearch.filter = "(objectclass=user)"

The PropertiesToLoad items below were necessary for the reporting task I had ahead of me. These lines modify the behavior of the Active Directory search by forcing it to return only what is specified rather than returning everything. Because of the size of the data set, I needed to limit the returned data to only what was essential.

$adsearch.PropertiesToLoad.AddRange(@("name"))
$adsearch.PropertiesToLoad.AddRange(@("lastLogon"))
$adsearch.PropertiesToLoad.AddRange(@("givenName"))
$adsearch.PropertiesToLoad.AddRange(@("SN"))
$adsearch.PropertiesToLoad.AddRange(@("DisplayName"))
$adsearch.PropertiesToLoad.AddRange(@("extensionAttribute1"))
$adsearch.PropertiesToLoad.AddRange(@("extensionAttribute2"))
$adsearch.PropertiesToLoad.AddRange(@("comment"))
$adsearch.PropertiesToLoad.AddRange(@("title"))
$adsearch.PropertiesToLoad.AddRange(@("mail"))
$adsearch.PropertiesToLoad.AddRange(@("userAccountControl"))
$adsearch.Container

This line executes the search based on the parameters specified above. For each iteration of the foreach loop, Active Directory will search the organizational unit for that loop and return all of the attributes specified above for each user account. The results of the execution will be stored in the variable named users. Unfortunately, as it exists, the information from this array can’t be simply written to a CSV file since that CSV file would contain only the Active Directory object name and an entry called “System.DirectoryServices.ResultPropertyCollection.” I needed to expand out and capture the individual Active Directory elements, which I do later in the script.

$users = $adsearch.findall()

As the script was running, I wanted to know how many objects were returned from each loop iteration, so I added the line below to show how many user accounts were being handled.

$users.Count

I initialized an array variable into which I’d write the individual Active Directory elements we wanted to capture.

$report = @()

I started another loop that executes for each Active Directory account for which we wanted to capture information.

foreach ($objResult in $users)
{

I needed to create a variable that houses the properties for an individual record. (There are other ways to do this, but I like to break things down to make them more readable.)

$objItem = $objResult.Properties

I created a new temporary object into which to write the various Active Directory attributes for this single record being processed in this processing iteration (remember, this is repeated for each record returned from Active Directory).

$temp = New-Object PSObject

For each individual Active Directory property that was returned from the Active Directory searcher, I added a named property to the temp variable for this loop iteration. Basically, this breaks out the single Active Directory record for a user into its individual components, such as name, title, email address, and so forth. (Case-sensitivity matters in this section.)

$temp | Add-Member NoteProperty name $($objitem.name)
$temp | Add-Member NoteProperty title $($objitem.title)
$temp | Add-Member NoteProperty mail $($objitem.mail)
$temp | Add-Member NoteProperty displayname $($objitem.displayname)
$temp | Add-Member NoteProperty extensionAttribute1 $($objitem.extensionattribute1)
$temp | Add-Member NoteProperty extensionAttribute2 $($objitem.extensionattribute2)
$temp | Add-Member NoteProperty givenname $($objitem.givenname)
$temp | Add-Member NoteProperty sn $($objitem.sn)
$temp | Add-Member NoteProperty useraccountcontrol $($objitem.useraccountcontrol)

I added the results of this individual record to the primary array into which we’re capturing the full results from the search for later export to CSV.

$report += $temp
}

This line creates the name of the file that will be written. I created a new file for each organizational unit processed.

$csvfile="AD-"+$targetou+".csv"

This line writes the entire report to disk, and the next line notifies the user that processing for this OU has completed.

$report | export-csv -notypeinformation $csvfile
"Wrote file for $targetou"
}

Summary
For my purposes, this PowerShell script captured exactly the information that I needed, and I was able to complete my comparison task. If you know of a more elegant way to get this information, please post it in the discussion.

A simple user primer for init

Many Unix-like systems–particularly those that follow the SysV model–make use of the concept of the runlevel. On these systems, runlevels are different modes of operation, some of which can be customized by the system administrator.

In the Linux world, the typical assignment of functionality to runlevels is:

  • 0: system halted
  • 1: single user mode
  • 2: multi-user mode without networking (on Debian-style systems, this is already a full multi-user mode)
  • 3: text-only multi-user mode
  • 4-5: additional multi-user modes (5 typically adds a graphical login)
  • 6: reboot

Switching runlevels is simple from the command line. The init command takes a number as an argument that can be used to switch runlevels.

telinit
The actual init daemon starts when the system boots and manages process startup and shutdown for the current runlevel. When you use the init command within a root user shell, however, it executes telinit. The telinit program switches to the runlevel corresponding to the numeric argument given to the init command. This means that the command init 0 will shut down the system, init 1 will shut down processes and enter single user mode, and init 6 will restart the system.

Three non-numeric arguments can also be used.

  • The letter q requests that init reload its configuration. It is largely unnecessary in many current Linux-based operating system configurations.
  • The letter s can be used to enter single user mode as well. Care should be taken when doing so, however; init s does not shut down current processes the way init 1 does.
  • The letter u requests that init re-execute itself.

For the most part, numeric values will be the only arguments you will need to give the init command (and, by extension, the telinit command). In fact, most often you would not need anything but init 0 or init 6, with an occasional need to use init 1. It is typical for Linux-based systems to be set up to automatically boot into the appropriate runlevel for normal operation.
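
As a quick illustration, a typical session on a SysV-style Linux system (run as root, and with the caveat that switching runlevels stops running services) might look like this:

runlevel          # prints the previous and current runlevel, e.g. "N 2"
init 1            # drop to single user mode
init 2            # return to a multi-user runlevel (use your distribution's default)
telinit q         # ask init to re-read its configuration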

Configuration of which processes are started and stopped with a given runlevel is primarily handled by the contents of /etc/rcN.d directories. Within these directories, symlinks to scripts in the /etc/init.d directory indicate which processes should be started or stopped when entering or leaving a given runlevel.
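
For example, on a Debian-style SysV layout you could inspect and create those links by hand for a hypothetical service named mydaemon (the S/K prefixes and two-digit numbers control start and stop order):

ls -l /etc/rc2.d                                          # S* links start services, K* links stop them
sudo ln -s /etc/init.d/mydaemon /etc/rc2.d/S95mydaemon    # start mydaemon when entering runlevel 2
sudo ln -s /etc/init.d/mydaemon /etc/rc0.d/K05mydaemon    # stop it when the system halts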

BSD Unix init
The BSD Unix init command serves a similar role, but it does not use the SysV init system. On BSD Unix systems, init is actually a utility that executes the rc utility. In some ways much like SysV init, BSD rc manages startup of processes on boot. The init command is used with a somewhat different set of arguments, however, because it does not use SysV runlevels:

  • init 0: shut down the system
  • init 1: enter single user mode
  • init 6: restart the system
  • init c: block further logins
  • init q: rescan the ttys file

The q option serves a purpose similar to the same argument to the Linux/SysV version of the init command.

Configuration of the rc system can vary across systems that use it. In the case of FreeBSD, most relevant configuration is handled by the /etc/rc.conf file, and by rc scripts in the /etc/rc.d directory. See the rc.conf manpage for details.
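
As a small example, enabling the stock SSH daemon on FreeBSD usually comes down to an rc.conf entry plus a manual start (script names and paths can vary between releases):

echo 'sshd_enable="YES"' >> /etc/rc.conf    # have rc start sshd at boot
/etc/rc.d/sshd start                        # start it now without rebooting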

shutdown
Many Unix-like systems provide a shutdown command that performs much the same purpose as certain init commands, and typically adds some convenient features such as sending warnings to user shells, delaying change of operating mode for a specified period of time or at a particular time of day, and kicking all users out of their logins and preventing all new logins. The shutdown command varies from system to system, and its manpage should be consulted for specifics on a given Unix-like OS.
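
The exact options differ from system to system, but on most Linux distributions the common cases described above look something like this:

shutdown -h +10 "Going down for maintenance in 10 minutes"   # warn logged-in users, halt in 10 minutes
shutdown -r now                                              # restart immediately
shutdown -c                                                  # cancel a pending shutdown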

This is not a comprehensive guide
Obviously, an in-depth, comprehensive survey and explanation of the entire system related to the init command is beyond the scope of a single article. With a little bit of enthusiasm and time, however, a lot can be learned from the manpages about how to manage system operation modes via commands like init and shutdown, and about how to configure the underlying system.

Simple filters in Perl, Ruby, and Bourne shell

In The Art of Unix Programming, Eric Raymond referred to the usefulness of a type of utility called a “filter”: many programs can be written as filters, which read sequentially from standard input and write only to standard output.

An example provided in the book is of wc, a program that counts characters (or bytes), “words”, and lines in its input and produces the numbers counted as output. For instance, checking the contents of the lib subdirectory for the chroot program files could produce this output:

~/tmp/chroot/lib> ls
libc.so.7    libedit.so.7    libncurses.so.8

You could pipe the output of ls to wc to get the number of lines, words, and characters:

~/tmp/chroot/lib> ls | wc
3    3    39

Writing your own filter scripts is incredibly easy in languages such as Perl, Ruby, and the Bourne shell.

Perl script
Perl’s standard filter idiom is quite simple and clean. Some people claim that Perl code is unreadable, but they have probably never read well-written Perl.

#!/usr/bin/env perl

while (<>) {
    # code here to alter the contents of $_
    print $_;
}

To operate on the contents of a file named file.txt:

~> script.pl file.txt

You can also use pipes to direct the output of another program to the script as a text stream:

~> ls | script.pl

Finally, you can call the script without piping any text stream or naming any file as a command line argument:
~> script.pl

If you do so, it will listen on standard input so that you can manually enter one line of input at a time. Telling it you are done is as easy as holding down [Ctrl] and pressing [D], which signals end-of-file (EOF).

If you want to do something other than alter the contents of Perl’s implicit scalar variable $_, you could print some other output instead. The $_ variable contains one line of input at a time, which can be used in whatever operations you wish to perform before producing a line of output. Of course, output does not need to be produced within the while loop either if you do not want to. For instance, to roughly duplicate the standard behavior of wc is easy enough:

#!/usr/bin/env perl

my @output = (0,0,0);

while (<>) {
    $output[0]++;
    $output[1] += split;
    $output[2] += length;
}

printf "%8d%8d%8d\n", @output;

Unlike wc, this does not list counts for several files specified as command line arguments separately, nor list the names of the files in the output. Instead, it simply adds up the totals for all of them at once. This simplistic script does not offer any of wc‘s command line options, either, but it serves to illustrate how a filter can be constructed.

The other examples will only cover the basic filter input handling idiom itself, and leave the implementation of wc-like behavior as an exercise for the reader.

Ruby script
Ruby does not have a single idiom that is obviously the “standard” way to do it. There are at least two options that work quite well. The first uses a Ruby iterator method, for a typically Rubyish style:

#!/usr/bin/env ruby

$<.each do |line|
  # code here to alter the contents of line
  print line
end

The second uses a while loop, but does not use the kind of “weird” symbol-based variable that some programmers remember only with distaste from Perl:

while line = gets
  # code here to alter the contents of line
  print line
end

Operating on the contents of a file, taking input interactively, or accepting a text stream as input works the same as for the equivalent Perl script.

Shell script
This is the least powerful filter idiom presented here because the Bourne shell does not provide the same succinct facilities for input handling as Perl and Ruby:

#!/bin/sh

while read data; do
    # code here to alter the contents of $data
    echo "$data"
done

To operate on the contents of a file named file.txt, you have to use a redirect, because a filename given as a command line argument is simply ignored (the read loop only consumes standard input). Calling the script with a redirect is still simple enough, though:

~> script.sh < file.txt

The redirect character < is used to direct the contents of file.txt to the script.sh process as a text stream. You can also use pipes to direct the output of another program to the script as a text stream, as with the other examples:

~> ls | script.sh

While the behavior you see with the Perl and Ruby examples can be duplicated using the Bourne shell, it requires a bit more code to do so, using a conditional statement to deal with cases where the filename is provided as a command line argument without the redirect as well as where a text stream is directed to the program by some other means. It hardly seems worth the effort to avoid using a redirect.
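
For completeness, here is one way that conditional version might look. It is only a sketch: when a filename is supplied it is redirected onto standard input, and otherwise the loop reads whatever stream the script was given.

#!/bin/sh

if [ $# -gt 0 ]; then
    exec < "$1"        # a filename was given; make it the script's standard input
fi

while read data; do
    # code here to alter the contents of $data
    echo "$data"
done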

Go forth and code
In my TechRepublic article Seven ideas for learning how to program, I suggested that writing Unix admin scripts could serve as a great way for new programmers to practice the craft of coding. Filters are among the most useful command line utilities in a Unix environment, and as demonstrated here, they can be surprisingly easy to write with a minimum of programming skill.

Regardless of your programming experience, these simple filter script idioms in three common sysadmin scripting languages can help any Unix sysadmin do his or her job better.

Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.

How to add a watermark to your Word documents

Microsoft Word


How to add a watermark to your Word documents

A watermark is a picture or text that appears behind a document’s contents. It’s usually a light grey or other neutral color so it doesn’t distract too much from the document’s purpose. Usually, a watermark identifies a company or the document’s status. For instance, a watermark might say confidential, urgent, or display a symbolic graphic. Adding a watermark to a Word document is a simple process:

  1. Click the Page Layout tab.
  2. Click Watermark in the Page Background group.
  3. Choose a watermark from the gallery. Or…
  4. Choose Custom Watermark. The Printed Watermark dialog presents three options. You can remove a custom watermark or insert a picture or text as watermark.
  5. Click OK once you’ve made your selections.

If you're using Word 2003, add a watermark as follows:

  1. From the Format menu, choose Background.
  2. Click Printed Watermark.
    To insert a picture as a watermark, click Picture Watermark. Then click Select Picture, navigate to find the picture file, and click Insert.
    To insert a text watermark, click Text Watermark and select or enter the text you want.
  3. Set any additional options.
  4. Click OK.

The watermark will display as part of the background on every page. Adding a watermark to a document is simple, yet effective.

Microsoft Excel


Excel parsing expressions

You probably wouldn’t store first and last names in the same cell, but you might have to work with a legacy workbook that does. Or, you might import data from a foreign source where the names are combined into one field. Fortunately, Excel has several string functions, including RIGHT(), LEFT(), FIND(), LEN(), and MID(), that can parse the name components into separate entries.

First, the easy part: parse the component to the left using the simple expression:
=LEFT(A2,FIND(" ",A2)-1)

It makes no difference whether the component is the first or last name. In the case of Robin Banks, the FIND() function returns the value 6, but the expression subtracts 1 from the result. Consequently, the expression extracts the first five characters. If you want to include the space character, omit the -1 component.

The inconsistency of the entries–some have middle initials and some don’t–makes extracting the last name a bit more complicated. You might try the following expression, but as you can see, it doesn’t work as expected:
=RIGHT(A2,LEN(A2)-FIND(" ",A2,FIND(" ",A2)+1))

If the entry doesn’t contain two space characters, the second FIND() returns an error value. Use the following expression instead:
=IFERROR(RIGHT(A2,LEN(A2)-IFERROR(FIND(" ",A2,FIND(" ",A2)+1),FIND(" ",A2))),A2)

IFERROR() handles the errors, but the logic is similar.

There’s one last step–returning the middle initial:
=MID(A2,FIND(" ",A2)+1,IFERROR(FIND(" ",A2,FIND(" ",A2)+1)-FIND(" ",A2)-1,0))

If there’s no middle initial, this expression returns an empty string instead of an error.

It’s worth mentioning that the Text To Columns feature is an expression-less solution if the entries are consistent. In addition, to learn more about using string functions, read Save time by using Excel’s Left, Right, and Mid string functions. Finally, IFERROR() is new to Excel 2007. The logic for these expressions is the same in 2003, but use ISERROR() to handle the error values.

Microsoft Access


Access parsing expressions

In the Excel post above, I showed you a few expressions for parsing inconsistent name entries. The logic of relying on the position of specific characters is just as useful in Access, although Access doesn’t use the same functions.

The Access table below stores names in firstname lastname format in a single field named Name. Some, but not all, entries have middle initials. Using the following expression, extracting the first name is fairly simple:
FirstName: Left([Name],InStr([Name]," ")-1)

The InStr() function returns the position of the first space character. Consequently, the Left() function extracts characters from the beginning of the entry, up to the first space character. Omit the -1 component if you need to include the space character.

Extracting the last name takes just a bit more work:
LastName: Right([Name],Len([Name])-InStrRev([Name]," "))

This expression applies the same logic, plus some. The length of the entire name minus the position of the last space character gives the number of characters to extract, counting back from the end of the string. Using Robin Banks, this expression evaluates as follows:
Right(“Robin Banks”,11-6)
Right(“Robin Banks”,5)
Banks

As you might suspect by now, extracting the middle initial takes even more work:
MI: IIf(InStrRev([Name]," ")>InStr([Name]," "),Mid([Name],InStr([Name]," ")+1,InStr([Name]," ")-2),"")

The IIf() function compares the position of the first space character and the second space character. If they’re the same, there’s only one space character and consequently, no middle initial (and I could’ve written the condition that way, just as easily). If the position of the last space character is greater than the position of the first space character, there’s a middle initial (or something!) between the first and last names. The Mid() function then uses the position of the first space character to extract two characters between the first and last names. Those two characters, in this case, are the middle initial and the period character following each initial. If some names have a period character and some don’t, this expression will return inconsistent results. Using Dan D. Lyons, this expression evaluates as follows:

IIf(7>4,Mid("Dan D. Lyons",4+1,4-2),"")
IIf(True,Mid("Dan D. Lyons",5,2),"")
Mid("Dan D. Lyons",5,2)
D.

When parsing inconsistent data, you have to find some kind of anchor. In this example, the anchor is the position of the space characters. It’s important to note that the " " component in all of the expressions is not an empty string. There’s a literal space character between the two quotation marks.

Specify a failover host for HA clusters in VMware

VMware vSphere’s High Availability (HA) feature allows virtual machines to be restarted on other hosts in the event of a host failure. I have had a love-hate-hate-love-hate relationship with HA throughout the years; I’m keeping score of how many times it has saved me compared to biting me in the rear end.

Putting my mixed feelings about the feature aside, I recently gravitated toward a new configuration option for HA clusters in certain situations. The option to specify a failover host for an HA cluster allows a specific ESXi (or ESX) host to be designated as the host that absorbs the workload of a failed host. This option is a property of an HA cluster (Figure A).

Figure A



This option is set here for a test cluster of only two hosts, but some of its attributes are easy to see. First, the vesxi4.rwvdev.intra host is designated as the HA failover host; this means that virtual machines are not intended to run on that host in a normal running configuration. This comes at the expense of the other host, because there is one extremely busy host and one relatively idle host.

The use of a designated failover host gives administrators the opportunity to capture some benefits compared to the other HA options. The first is that you could place a lower-provisioned host in the admission control inventory. This can include using a 2 CPU (socket) host instead of the 4 CPU hosts that exist in the rest of the cluster, thus reducing licensing costs. Another benefit is that each host that is not the failover host can be driven to higher utilization, as the admission control policy would not prohibit additional virtual machines on those hosts.

There are a number of critical decision points on HA, but I would be remiss if I did not mention what I feel to be the authoritative resource for this feature: the HA Deepdive from Duncan Epping’s Yellow Bricks blog. Duncan has good information about all of HA, including the designated failover host option.

Probably the best use case for using HA and designating a failover host is to set individual virtual machine HA event response rules. A good example of this would be to not perform an HA failover on development virtual machines, should they be intermixed in a cluster. Figure B shows this configured in an HA cluster where all test and development virtual machines are configured to not have an HA event restart.

Figure B



This is the proverbial “it depends” configuration item. There are plenty of factors that go into considering this HA cluster arrangement, but the designated failover option doesn’t seem to be used that frequently.

Rick Vanover (MCITP, MCSA, VCP, vExpert) is an IT Infrastructure Manager for a financial services organization in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

Create an easy to use Linux calendar sharing server

In my ever-continuing quest to bring Linux to business, I found one of the biggest missing pieces was the ability for Linux mail clients to easily share calendars with other Linux users. Most of the Linux mail clients (Evolution, Thunderbird, etc.) offer the ability to publish calendars or use remote calendars.

Although it’s a fairly simple task to share those calendars out, the task of correctly setting up a connecting calendar server is not. That is, unless you happen upon Radicale CalDAV Server. This particular calendar server is about the easiest CalDAV server I have ever installed and used.

Radicale can share calendars with most open source calendar tools and features:

  • Shares calendars using CalDAV or HTTP.
  • Supports events and todos.
  • Works out-of-the-box with little to no configuration required.
  • Warns users about concurrent editing.
  • Limits access by authentication.
  • Secures connections.

Let’s take a look at how Radicale can be set up on an Ubuntu 10.10 machine.

Step 1: Installation
To install Radicale on Ubuntu simply open up the Ubuntu Software Center, search for radicale, and click Install. You will need to enter your sudo password for the installation to complete. When the software is installed you can close out the Software Center and start working with Radicale.

If you are installing on a non-Ubuntu distribution, you might have to install from source. You will want to make sure you have Python installed.

Step 2: Configuration
Believe it or not, this step is optional, as Radicale should work out of the box for you. On my Ubuntu machine hosting the Radicale Server, no configuration was necessary. But more than likely you are going to want to set up some configuration options (such as authentication). To do this, the file ~/.config/radicale/config must be edited (or created, if it’s not there).

The default configuration file looks like:

[server]
# CalDAV server hostname, empty for all hostnames
host =
# CalDAV server port
port = 5232
# Daemon flag
daemon = False
# SSL flag, enable HTTPS protocol
ssl = False
# SSL certificate path (if needed)
certificate = /etc/apache2/ssl/server.crt
# SSL private key (if needed)
key = /etc/apache2/ssl/server.key

[encoding]
# Encoding for responding requests
request = utf-8
# Encoding for storing local calendars
stock = utf-8

[acl]
# Access method
# Value: fake | htpasswd
type = fake
# Personal calendars only available for logged in users (if needed)
personal = False
# Htpasswd filename (if needed)
filename = /etc/radicale/users
# Htpasswd encryption method (if needed)
# Value: plain | sha1 | crypt
encryption = crypt

[storage]
# Folder for storing local calendars,
# created if not present
folder = ~/.config/radicale/calendars

The above configuration should be fairly obvious. Just make the changes that suit your needs and save the file.

Once you have the configuration saved (or you need no configuration), all you have to do is start the Radicale daemon with the command radicale. You might want to set this to start up automatically. From within GNOME you can do this by clicking System | Preferences | Startup Applications and adding the radicale command.

Creating (or connecting to) calendars
It is very simple to create or connect to Radicale from both Evolution and Thunderbird (with the Lightning addon). When connecting to (or creating) a new calendar you will be using a Network calendar with the following addresses:

For Thunderbird:

http://ADDRESS_TO_CALSERV:5232/USER/CALENDAR

For Evolution:

caldav://ADDRESS_TO_CALSERV:5232/USER/CALENDAR

Where ADDRESS_TO_CALSERV, USER, and CALENDAR are all unique to your system. If the calendar you want to connect to already exists just check inside the user’s (the user that starts the daemon on the target machine) ~/.config/radicale/ directory for this information. NOTE: Both calendar types will be CalDAV.
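
If you want to confirm the server is reachable before pointing a mail client at it, a quick scripted check can help. Below is a minimal Python sketch; it assumes the daemon is running with the default port and the fake ACL, and that ADDRESS_TO_CALSERV, USER, and CALENDAR have been replaced with your own values. It simply issues an HTTP GET against the calendar URL and prints the start of whatever comes back (with Radicale, that should be iCalendar data beginning with BEGIN:VCALENDAR):

# Minimal reachability check for a Radicale calendar (illustrative only;
# replace the placeholders with values from your own setup).
import urllib.request

url = "http://ADDRESS_TO_CALSERV:5232/USER/CALENDAR"

with urllib.request.urlopen(url) as response:
    data = response.read().decode("utf-8", errors="replace")

print(data[:200])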

That’s all there is to it. You will now be able to add/view entries on the calendar(s) on the server. The only pitfall is that you have to manually refresh the calendars in order to see changes. That’s a small price to pay for such simplicity.

Jack Wallen was a key player in the introduction of Linux to the original Techrepublic. Beginning with Red Hat 4.2 and a mighty soap box, Jack had found his escape from Windows. It was around Red Hat 6.0 that Jack landed in the hallowed halls of Techrepublic.

5 tips for effectively managing your software assets

Properly tracking and organizing software licenses are major responsibilities of every IT manager. Organizations that have a clear understanding of their software assets and how they are utilized will be equipped to remain license-compliant and to make better purchasing decisions. Managing software assets effectively can also save enterprises thousands of dollars per year.

But many businesses have discovered that their software asset records are neither accurate nor current. Although the awareness of the importance of true asset management has increased, organizations often don’t do an adequate job of managing the risk associated with being noncompliant. Many businesses also purchase unnecessary, excess application licenses, which could result in overspending and inaccurate budgeting.

To avoid noncompliance risk and reduce software costs, businesses need to deploy a software asset management program that includes a process to ensure all applications are appropriately recorded and categorized. Here are five tips to help IT managers meet this challenge.

1: Automate the process
IT departments have historically placed an emphasis on enterprise efficiency by relying on learned best practices. But when tracking software assets, many administrators rely on antiquated manual tools while running from computer to computer. An automated solution reduces the excessive time spent on managing software assets while eliminating the manual reporting processes. With the greater insight into software allocation, as well as usage and license compliance, IT is prepared for vendor and internal inquiries. IT departments can also proactively make accurate software budget recommendations and assignments.

2: Integrate with asset management
To be cost effective and easy to use, software license management tools must be integrated into an organization’s overall asset management solution. This solution should also include software distribution, OS deployment, patch management, and remote management, since all these challenges are so closely related. Having integrated solutions has become increasingly important as IT departments face external vendor audits and internal budget cuts. An automated software license management solution that is a part of an overall asset management plan helps businesses improve efficiency and remain compliant while reducing software purchases and support costs.

3: Prepare for vendor audits
Technology vendors have recently increased their efforts to eliminate the unsanctioned use of software by performing surprise audits. Removing installed software or purchasing more licenses after an audit notification has been given is one of the worst mistakes you can make when dealing with an auditor. Organizations should conduct practice audits on a regular basis using software license management tools. In addition, they should designate a response team to ensure that their software license management practices are enough to pass an audit. An automated solution provides fast and clear access to application portfolios by generating detailed reports at any time.

4: Align software purchases, contracts, and support
Underutilized software wastes IT dollars. A software license management program can help you accurately plan your budget and gives you accurate insight into software usage. It should not only help you find out what licenses you currently have but also show you how often they are being used and by whom. Effective software management tools enable IT to free up software and negotiate the purchase price of software products. They can also help you develop a comprehensive strategy for aligning purchases, contracts, and support. This in turn avoids unnecessary purchases and keeps maintenance costs to a minimum.

5: Rely on an easy-to-implement tool that offers a one-year ROI
When researching options, look for software license management tools that are easy to implement and for which the solution provider can demonstrate a return on investment within the first year. Leading solutions should offer cost-efficient controls as well as compliance monitoring by combining processes, resources, and regulatory requirements into a single management framework. Also look for a solution that provides easy-to-read, customizable, on-demand dashboard reports to assist with vendor audits and to gain a greater understanding of product usage.

With the right solution, IT departments can avoid the risk of noncompliance using a process that does not strain staff resources. IT administrators can also improve the company’s bottom line by saving thousands of dollars per year in licensing fees.

Adee McAninch is the product marketing manager at Numara Software. This article first appeared in ZDNet Asia’s sister site TechRepublic.com.

Test your DNS name servers for spoofability

What does DNS cache poisoning mean to us? A lot. Using cache poisoning, bad guys can redirect Web browsers to malicious Web sites. After that, any number of bad things can happen.

DNS primer
Being human, it’s easier to remember names than numbers. But, computers prefer numbers. So we use a process called Domain Name System (DNS) to keep track of both. It translates domain names into numeric addresses.

Let’s use Web browsers as an example. The user types in the name of a Web site and hits enter. The Web browser sends a DNS query to the DNS name server being used by the computer. The DNS name server checks its database for the Web site’s name and responds with the associated IP address. With the IP address, the Web browser retrieves the Web site.
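
You can watch this translation happen from any scripting language. Here is a small Python illustration (the host name is just an example); it asks the operating system’s resolver, which in turn queries the configured DNS name server, for a Web site’s IP address:

# Translate a Web site name into an IP address using the system resolver.
import socket

hostname = "www.example.com"   # any Web site name will do
ip_address = socket.gethostbyname(hostname)
print("%s resolves to %s" % (hostname, ip_address))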

Too predictable

Two more pieces of the puzzle are needed to understand how Dan Kaminsky can poison DNS server caches. They are:

  • The query transaction ID (which allows DNS responses to be matched to DNS queries) is incremented in numeric order, and queries are always sent from the same source port.
  • Applications using DNS explicitly trust the domain name/IP address exchange.

The predictability and blind acceptance allowed him to:

  • Create a rogue DNS response.
  • Send it to the computer or DNS name server asking for the associated IP address.
  • The DNS response is accepted as long as the query transaction ID matches and it arrives before the authoritative DNS name server’s response.

After the dust settled, Kaminsky realized this technique could be used to redirect web browsers to malicious Web sites.

Increase randomness

To prevent redirection, Kaminsky came up with an elegant solution. There are 2 to the power of 16 possible query transaction IDs and 2 to the power of 16 possible source ports. Why not randomize the query transaction IDs? He also suggested using a random source port for each query instead of the same one every time.

If you mix up the selection process for each, the number of potential combinations becomes 2 to the power of 32. That makes it sufficiently difficult to guess.
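
The arithmetic behind that claim is easy to confirm; here is a quick illustration in Python:

# 16 bits of transaction ID combined with 16 bits of source port.
transaction_ids = 2 ** 16   # 65,536 possible query transaction IDs
source_ports = 2 ** 16      # 65,536 possible source ports
print(transaction_ids * source_ports)   # 4294967296, i.e. 2 to the power of 32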

Okay, we have a solution. But, as I alluded to earlier, not all DNS name servers are using the prescribed fixes. Thankfully, there are ways to tell if the DNS name server is updated.

Testing for spoofability

I was listening to a Security Now podcast with Steve Gibson and Leo Laporte. The topic was “Testing DNS Spoofability”. In the broadcast, Gibson mentioned he developed an online test to see if DNS name servers are susceptible to cache poisoning.

The test is called DNS Nameserver Spoofability Test. The program exchanges a large quantity of DNS queries between the DNS name server being tested and what Gibson calls a Pseudo DNS Nameserver (courtesy of GRC.com).

The reason so many queries are needed is to accurately test the randomness of the query transaction ID and source port selection.

Router Crash Test

During development of the spoofability test, Gibson encountered something unexpected: the test was crashing certain consumer-grade routers. This link is to the list of routers that do crash. The Web page also explains why this is occurring.

Scatter charts

I use OpenDNS for my DNS servers. The following slide shows OpenDNS employs the fixes, creating a random scatter chart.

The next slide (courtesy of GRC.com) represents a DNS server using a selection algorithm that is far less random.

The final example (courtesy of GRC.com) is telling. Both the query transaction ID and the source port are being incremented in a linear fashion. Although the values are changing, it is in a predictable fashion. Not good.

Find a public DNS provider

There are alternatives if you find the assigned DNS name servers are not randomizing the entries sufficiently. I mentioned earlier that I use OpenDNS. It is free, and it is the only public DNS service that offers protection from DNS rebinding attacks. This GRC.com Web page has a list of other public DNS providers.

Final thoughts

To avoid problems resulting from being redirected to a malicious Web site, please test the DNS name servers used by your computer.

Michael Kassner has been involved with IT for over 30 years and is currently a systems administrator for an international corporation and security consultant with MKassner Net.

Backdoor ways to reboot a Windows server

When you need to reboot a Windows server, you’ll occasionally encounter obstacles to making that happen. For instance, if remote desktop services aren’t working, how can you reboot the server? Here is a list of tricks I’ve collected over the years for rebooting or shutting down a system when I can’t simply go to the Start Menu in Windows.

  • The shutdown.exe command: This gem will send a remote (or local) shutdown command to a system. Entering shutdown /r /m \\servername /f /t 10 will send a remote reboot to a system. Shutdown.exe is included with all modern Windows systems; for older versions, it was part of the Resource Kit. For more details, read this Microsoft KB article on the shutdown.exe command. (If you need to reboot several servers at once, see the short scripted example after this list.)
  • PowerShell Restart-Computer: The equivalent of the command above in PowerShell is:
    Start-Sleep 10
    Restart-Computer -Force -ComputerName SERVERNAME
  • Hardware management device: If a device such as an HP iLO or Dell DRAC is in use, there is a virtual power button and remote screen console tool to show the system’s state regardless of the state of the operating system. If these devices are not configured with new servers, it’s a good idea to have them configured in case the mechanisms within the operating system are not available.
  • Virtual machine power button: If the system in question is a virtual machine, all hypervisors have a virtual power button to reset the system. In VMware vSphere, be sure to select the option to Shut Down The Guest Operating System instead of Power Off; this makes a call to VMware Tools so the shutdown is clean. If that fails, the Power Off button is the next logical step.
  • Console walkthrough: When the server administrator does not have physical access to the system, walking someone through the process may be effective. For security reasons, a single user account (domain or local) can be created with the sole permission of rebooting the server. The person assisting could log on as this temporary user, and the account can be deleted immediately after the local shutdown command is issued. Further, that temporary account could be set up with a logon script that runs the reboot command, so the person assisting the server administrator never has to interact with the system beyond logging on.
  • Configure a scheduled task through Group Policy: If you can’t access the system in any other mainstream way–perhaps the Windows Firewall is turned on and you can’t get in to turn it off–set a GPO to reconfigure the firewall state and slip in a reboot command in the form of the shutdown.exe command executing locally (removing the /m parameter from above). The hard part will be getting the GPO to deploy quickly.
  • Enterprise system management packages: Packages such as Symantec’s Altiris and Microsoft System Center agents communicate to the management server and can receive a command to reboot the server.
  • Pull the plug: This is definitely not an ideal approach, but it is effective. For physical servers, if a managed power strip with port control is available, a single system can have its power removed and restored.
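
As mentioned in the first item, shutdown.exe lends itself to simple scripting when several servers need the same treatment. The following Python sketch is illustrative only: the server names are placeholders, and it assumes shutdown.exe is on the path and that you have reboot rights on the remote systems. It simply repeats the remote reboot command shown above for each server in a list:

# Loop the remote reboot command over several servers (illustrative only).
import subprocess

servers = ["SERVER01", "SERVER02"]   # placeholder server names

for server in servers:
    # Equivalent to: shutdown /r /m \\SERVER01 /f /t 10
    subprocess.call(["shutdown", "/r", "/m", r"\\" + server, "/f", "/t", "10"])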

Rick Vanover (MCITP, MCSA, VCP, vExpert) is an IT Infrastructure Manager for a financial services organization in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

RIM promises to set up filters in Indonesia

Setting up Internet filters in Indonesia is “top priority” for Research In Motion (RIM) and the company is cooperating with the local government and carriers to implement porn blockers on its BlackBerry services.

In an e-mail statement to ZDNet Asia, the Canadian phonemaker said it has been in discussions with the government and carriers in Indonesia to set up an Internet filter and remains committed to implementing satisfactory technical solutions with its partners.

According to a report Monday by BusinessWeek, Indonesia Communication and Information Technology Minister Tifatul Sembiring said RIM has until Jan. 21 to begin filtering porn sites or face legal actions including revocation of its services.

Muhammad Budi Setyawan, the ministry’s director general of post and telecommunications, said the government is not targeting RIM but pornographic Web sites, and will meet with RIM and six mobile service providers on Jan. 17 with its request to filter pornographic content. He noted that if RIM refuses to filter such materials, the company might be asked to shut down its browser service.

According to RIM’s statement, the BlackBerry maker agrees with the ministry’s “sense of urgency” on the issue.

Tifatul since last August has ordered Internet service providers in Indonesia to block pornographic Web sites.

The move, according to Reporters Without Borders, was sparked by the circulation of videos reportedly showing local celebrities having sex, leading critics to blame the Internet for a decline in values in Indonesian society.

The government’s demand for anti-porn Web filters has been met with dissent by some citizens, noted a report by AFP, which quoted a Twitter user who questioned if blocking BlackBerry services would be effective in reducing the flow of pornographic content in the country.

According to The Jakarta Post, Tifatul also outlined other demands on his Twitter account, such as setting up a server in the country for law enforcement officials to track down corruption suspects.

Indonesia is considered an important market for RIM in the Asia-Pacific region and often singled out as a success story for the BlackBerry maker. In a ZDNet Asia report last April, RIM’s Asia-Pacific director, Gregory Wade, pointed to how prepaid BlackBerry services in the country had played a key role in boosting the company’s growth in Indonesia.

Analyst: Get ready for all-in-one app stores

With more devices equipped to access app stores for content, the market will see the emergence of an all-in-one app store able to recognize the type of device used and push apps relevant to that platform, observes an analyst, who points to major players such as Apple and Google as already heading in that direction.

Bryan Wang, Asia-Pacific associate vice president of connectivity at Springboard Research, noted that consumers today own multiple devices and would want to be able to share their applications across these devices to have a seamless mobile Internet experience.

“We do believe that there will be a market for one-stop app shops able to recognize the device accessing it and push relevant content to end-users,” Wang said in an e-mail interview.

To this end, he pointed to operators such as Apple and Google which he said were already heading in that direction. With Apple launching its Mac App Store on Jan. 6 and Google unveiling its Chrome Web Store a month earlier on Dec. 7, both companies are extending the mobile app environment into the desktop arena, he added.

“[Opening up these desktop app stores] is one of the steps for Apple and Google to move to a one-stop shop direction,” the Springboard analyst said. “When PC app stores get larger market traction in the next year or two, we think it would be natural [for such vendors] to have their current mobile and desktop app stores combined.”

Wang noted that Apple and Google currently are the only two operators that have “the capability to attract large volumes of customers in the next couple of years”, a component that is necessary for a one-stop app shop to flourish.

The potential of a one-stop app shop also drew a positive response from Malcolm Lu, a product packaging designer, and a user of Apple’s MacBook Pro, iPad slate device and Google’s Chrome Web Store.

He told ZDNet Asia that he is “for the idea” of having an all-in-one app store as it would help him save time in looking over apps that are compatible with his respective devices.

Multi platforms not so soon
Lu, however, said it is “doubtful” a “universal” app store that is able to cut across different platforms and devices will be available any time soon.

He noted that mobile platform operators today are focused only on introducing and maintaining their own respective app stores, such as Apple with the iTunes App Store and Research In Motion with its BlackBerry App World. And this trend does not seem to be ending in the near future, he added.

Furthermore, there are differences in programming apps for the various mobile platforms and multiple devices that apps run on, making it “harder to develop a universal app store”, he noted.

Wang agreed that universal app stores will not see the light of day in the near future. He noted that in order to create a successful platform- and device-agnostic one-stop app store, the operator must already have an established brand name and customer buy-in for its existing services.

The operator should also have a multi-vendor, multi-technology approach in its business strategy in order to want to create the one-stop app store to begin with, and there are not many such companies in the market today, he added.

Wang identified Facebook and Samsung as two players that could potentially fulfill one or both factors, but whether the companies would eventually set up a universal app store remains to be seen.

Industry trends appear to support these observations. Besides smartphones, tablets and desktops, carmakers are also jumping onto the app store bandwagon, further complicating the app store landscape.

The Wall Street Journal reported on Saturday that automobile manufacturers such as General Motors and Toyota have announced plans to turn their vehicles’ dashboards into Internet-connected app platforms. General Motors, for instance, expanded its OnStar system, which was first developed to provide directions and emergency services, to include apps that access the car system and push information such as vehicle diagnostics to car owners.

According to Gartner analyst Thilo Koslowski, the auto industry’s focus on apps comes as carmakers look for new ways to differentiate their products from the competition. He said in the Wall Street Journal report: “Internet-connected autos will be among the fastest-growing segments in four years.” Koslowski also predicted that more than half of all new premium vehicles in the United States will support apps by 2013 and that mass-market cars will reach that level in 2016.

5 tips for easy Linux application installation

Most people don’t realize how easy it is to install applications on modern releases of the Linux operating system. As the package managers have evolved into powerful, user-friendly tools, the task of installation has become equally user-friendly. Even so, some users encounter traps that seem to trip them up at every attempt.

How can you avoid these traps and be one of those Linux users happily installing application after application? With these five tips, that’s how.

1: Get to know your package manager
Probably the single most user-friendly package management system, on any operating system, is the Ubuntu Software Center. This tool is simply an evolution of the typical GUI front end for Linux package management systems. All you have to do is open that tool, search for the application you want to install, mark it for installation, and click Apply. And because there are thousands upon thousands of applications available, you can happily spend hours upon hours finding new and helpful applications to install.

2: Install the necessary compilers
If you have an application that must be installed from source, you will need to have the necessary compilers installed. Each distribution uses either a different compiler or a different release of a compiler. Some distributions, such as Ubuntu, make this task simple by having a single package to install (issue the command sudo apt-get install build-essential). Once you have the compiler installed, you can then install applications from source.

3: No .exe allowed
This is one of those concepts that is so fundamental, yet many users don’t understand it. The .exe installers are for Windows only. For Linux, you are looking for extensions such as .deb or .rpm for installation. The only way to install .exe files on a Linux machine is with the help of WINE, but most new users should probably steer clear of this tool. If you find a binary file online (one that works with your distribution), your package manager should ask whether you want to install the downloaded file. If you have WINE installed, and your system is configured correctly, you will be prompted (with the help of WINE) to install even .exe files.

4: Understand dependencies
This is probably one of the trickiest aspects of installing packages in Linux. When using a package manager (such as PackageKit, Synaptic, or Ubuntu Software Center), the dependencies are almost always taken care of automatically. But if you are installing from source, you will have to install the dependencies manually. If you don’t get all the dependencies installed (and installed in the correct locations), the application you are installing will not work. Even if you force the installation without all of its dependencies, the application will not run properly.

5: Always start with the package manager
There are several reasons why distributions use package managers. Outside of user-friendliness, the single most important reason for package managers is to ensure system cohesiveness. If you use a patchwork of installation methods, you can’t be sure that your system is aware of everything installed. This is also true for tools like Tripwire, which monitor changes in your system. You want to be as uniform and as standardized as you can in your installations. To that end, you should ALWAYS start with your package manager. Only when you can’t find a precompiled binary for your distribution should you turn to installing from source. If you remain consistent with this installation practice, your system will run more smoothly for longer. If you mix and match, you might find some applications are not aware of other applications, which can really cause dependency issues.

Simple and friendly
Users do not have to fear installing applications on Linux. By following some simple guidelines, anyone (regardless of experience level) can have an easy time managing their Linux desktop. With powerful, accessible package managers, nearly every modern Linux distribution offers the user every tool they need to add, remove, and update their applications with ease and speed.

Using OData from Windows Phone 7

My initial experiences with Windows Phone 7 development were a mixed bag. One of the things that I found to be a big letdown was the restrictions on the APIs and libraries available to the developer. That said, I do like Windows Phone 7 development because it allows me to use my existing .NET and C# skills and keeps me within the Visual Studio 2010 environment that has been very comfortable for me over the years. So despite my initially poor experience getting started with Windows Phone 7, I was willing to take a few more stabs at it.

One of the apps I wanted to make was a simple application to show the local crime rates. The U.S. government has this data on Data.gov, but it was only available as a data extract, and I really did not feel like building a Web service around a data set, so I shelved the idea. But then I discovered that the “Dallas” project had finally been wrapped up, and the Azure Marketplace DataMarket was live.

Unfortunately, there are only a small number of data sets available on it right now, but one of them just happened to be the data set I wanted, and it was available for free. Talk about good luck! I quickly made a new Windows Phone 7 application, and tried to add the reference, only to be stopped in my tracks with this error: “This service cannot be consumed by the current project. Please check if the project target framework supports this service type.”

It turns out that Windows Phone 7 launched without the ability to access WCF Data Services. I am not sure who made this decision, seeing as Windows Phone 7 is a great match for Azure Marketplace DataMarket, it’s fairly dependent on Web services to do anything useful, and Microsoft is trying to push WCF Data Services. My initial research found only a CTP from March 2010 that provided this functionality. I asked around and found out that code to do just this was announced at PDC recently and was available for free on CodePlex.

Something to keep in mind is that Windows Phone 7 applications must be responsive when performing processing and must support cancellation of “long running” processes. In my experience with the application certification process, I had an app rejected for not supporting cancellation even though it would take at most three seconds for processing. So now I am very cautious about making sure that my applications support cancellation.

Using the Open Data Protocol (OData) library is a snap. Here’s what I did to be able to use an OData service from my Windows Phone 7 application:

  1. Download the file ODataClient_BinariesAndCodeGenToolForWinPhone.zip.
  2. Unzip it.
  3. In Windows Explorer, go to the Properties page for each of the DLLs, and click the Unblock button.
  4. In my Windows Phone 7 application in Visual Studio 2010, add a reference to the file System.Data.Services.Client.dll that I unzipped.
  5. Open a command prompt, and navigate to the directory of the unzipped files.
  6. Run the command: DataSvcUtil.exe /uri:UrlToService /out:PathToCSharpFile (in my case, I used https://api.datamarket.azure.com/Data.ashx/data.gov/Crimes for the URL and .\DataGovCrime.cs for my output file). This creates a strongly typed proxy class to the data service.
  7. I copied this file into my Visual Studio solution’s directory, and then added it to the solution.
  8. I created my code around cancellation and execution. Because I am not doing anything terribly complicated, and because the OData component already supports asynchronous processing, I took a backdoor hack approach to this for simplicity. I just have booleans indicating a “Running” and “Cancelled” state. If the event handler for the service request completion sees that the request is cancelled, it does nothing.

There was one big problem: The OData Client Library does not support authentication, at least not at a readily accessible level. Fortunately, there are several workarounds.

  • The first option is what was recommended at PDC: construct the URL to query the data manually, and use the WebClient object to download the XML data and then parse it manually (using LINQ to XML, for example). This gives you ultimate control and lets you do any kind of authentication you might want. However, you are giving up things like strongly typed proxy classes, unless you feel like writing that code yourself (have fun).
  • The second alternative, suggested by user sumantbhardvaj in the discussion for the OData Client Library, is to hook into the SendingRequest event and add the authentication. You can find his sample code on the CodePlex site. I personally have not tried this, so I cannot vouch for the result, but it seems like a very reasonable approach to me.
  • Another alternative that has been suggested to me is to use the Hammock library instead.

For simple datasets, the WebClient method is probably the easiest way to get it done quickly and without having to learn anything new.
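
To give a feel for that first, do-it-yourself option in something other than C#, here is a rough Python sketch. The query path appended to the service URL, the account key, and the output handling are all illustrative assumptions, not part of the OData Client Library discussed above. It requests an OData feed over HTTPS with basic authentication and prints the entry titles from the Atom XML it receives:

# Rough illustration of the "build the URL yourself and parse the XML" option.
import base64
import urllib.request
import xml.etree.ElementTree as ET

# Base service URL from step 6 above; append the collection and query you need.
feed_url = "https://api.datamarket.azure.com/Data.ashx/data.gov/Crimes"
account_key = "YOUR-ACCOUNT-KEY"   # placeholder credential

request = urllib.request.Request(feed_url)
token = base64.b64encode((":" + account_key).encode()).decode()
request.add_header("Authorization", "Basic " + token)

with urllib.request.urlopen(request) as response:
    tree = ET.parse(response)

atom = {"a": "http://www.w3.org/2005/Atom"}
for entry in tree.getroot().findall("a:entry", atom):
    title = entry.find("a:title", atom)
    print(title.text if title is not None else "(no title)")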

While it is unfortunate that the out-of-the-box experience with working with OData is not what it should be, there are enough options out there that you do not have to be left in the cold.

Disclosure of Justin’s industry affiliations: Justin James has a contract with Spiceworks to write product buying guides; he has a contract with OpenAmplify, which is owned by Hapax, to write a series of blogs, tutorials, and articles; and he has a contract with OutSystems to write articles, sample code, etc.

Justin James is an employee of Levit & James, Inc. in a multidisciplinary role that combines programming, network management, and systems administration. He has been blogging at TechRepublic since 2005.

Microsoft Office Outlook


Change Outlook’s Calendar color to better highlight the current day

In Outlook’s Month view, the current day is a bit washed out. As you can see below, the default blue is a tad lighter than other highlighted areas.

It isn’t impossible to find, but it does seem to fade into the background. (It’s even more obscure in Outlook 2003.)

The color is in keeping with the theme, but if you want the current day to pop out a bit, try changing the default color.

In Outlook 2003 and 2007, do the following to change this property:

  1. From the Tools menu, choose Options and click the Preferences tab (if necessary).
  2. On the Preferences tab, click Calendar Options in the Calendar section.
  3. In the Calendar Options section, choose a new color from the Default Color dropdown.
  4. Click OK.

If you’re using Outlook 2010, do the following:

  1. Click the File menu and then choose Options.
  2. Click Calendar in the left pane.
  3. In the Display Options section, choose a new color from the Default Calendar Color dropdown.
  4. Click OK.

This property changes the Calendar color, not the selected day, so it’s a big change. You’ll want to choose a color that contrasts with the current day’s border, as shown above. In this case, the orange border is easy to see next to the green–at least, I think it is. Personal preference strongly figures into this particular choice.

This tip won’t set your world on fire or anything. It’s just one of those simple things that you can control, so you should if it makes your day a bit easier!

Microsoft Word


Use Word’s Replace to transpose a column of names

You’ll often see a column of names entered in a Word document either as a list or part of a table. Listing the names is no problem, but changing their order after they’re entered could be.

For instance, let’s say your document contains a list of names entered in firstname lastname format, but you want them in lastname, firstname format. Do you have to re-enter them? No, there’s a simple wildcard trick you can use with Word’s Replace feature that will take care of the transposing for you.

To get Word to transform a list or column of names, do the following:

  1. Select the list of names you want to transpose.
  2. From the Edit menu, choose Replace. In Word 2010, click Replace in the Editing group on the Home tab.
  3. Click the More button and check the Use Wildcards option. This is an important step–if you miss it, this technique won’t work.
  4. In the Find What control, enter (<*>) (<*>), with a space character between the two sets.
  5. In the Replace With control, enter the following characters \2, \1, with a space character before the second slash character.
  6. Click Replace All. Word will transpose the first and last names and separate them with a comma character.
  7. When Word asks you to expand the search, click No, and then Close to return to the document.


Wildcard explanation

Once you understand the wildcards, the whole trick is easily exposed:

  • (): The parentheses aren’t true wildcards, at least not in a matching sense. They allow you to divide a pattern into logical sequences.
  • <>: The brackets mark the beginning and ending of a word or phrase.
  • \: The backslash character, used with a number, stands in for the sequence that the number identifies (the first or second parenthesized group above).

In this case, the Find What code splits the first and last names into two separate sequences. In the Replace With code, the \2 component inserts the contents of the second sequence (the last name) first, and the \1 component follows it with the contents of the first sequence (the first name). As you can see, you’re not limited to just transposing first and last names. With these wildcard tools, you can rearrange quite a bit of content!
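
Outside Word, the same capture-and-swap idea can be expressed with regular expressions. Here is a small Python illustration; the regex syntax is not Word’s wildcard syntax, but the logic mirrors the Find What and Replace With entries above:

# Swap "firstname lastname" into "lastname, firstname" using captured groups.
import re

names = ["Robin Banks", "Dan Lyons"]
pattern = re.compile(r"(\w+) (\w+)")

for name in names:
    print(pattern.sub(r"\2, \1", name))
# Robin Banks -> Banks, Robin
# Dan Lyons   -> Lyons, Dan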

Microsoft Excel


How to sum values in an Excel filtered list

Filters are a powerful and easy-to-use feature. Using filters, you can quickly limit data to just the records you need to see. Summing filtered records is another matter. You might try a SUM() function but you might get a surprise–well, I can promise you’ll get a surprise.

The figure below shows a filtered list. You can tell by the row numbers to the left that many rows are hidden. (We’ll skip how the actual filter works. To learn more about that, read How to use And and Or operators with Excel’s Advanced Filter.)

The next figure shows what happens when you try to sum the filtered values. You can easily tell that the result isn’t correct; the value is too high, but why? The SUM() function is evaluating all the values in the range D14:D64, not just the filtered values. There’s no way for the SUM() function to know that you want to exclude the filtered values in the referenced range.

The solution is much easier than you might think! Simply click AutoSum–Excel will automatically enter a SUBTOTAL() function instead of a SUM() function. This function, =SUBTOTAL(9,D6:D82) in this example, references the entire list but evaluates only the filtered values.

About SUBTOTAL()

Although the SUBTOTAL() function references the entire list of values in column D, it evaluates only those in the filtered list. You might think that’s because of the first argument, the value 9. This argument tells Excel to sum the referenced values. The following table lists this argument’s acceptable values:

Evaluates hidden values    Ignores hidden values    Function
1                          101                      AVERAGE()
2                          102                      COUNT()
3                          103                      COUNTA()
4                          104                      MAX()
5                          105                      MIN()
6                          106                      PRODUCT()
7                          107                      STDEV()
8                          108                      STDEVP()
9                          109                      SUM()
10                         110                      VAR()
11                         111                      VARP()

At this point, you might be saying, Wait a minute! The value 9 is supposed to evaluate hidden values. Shouldn’t the correct argument be 109? It’s a valid question, and I have an explanation; I just don’t think it’s a great explanation: SUBTOTAL() ignores rows that aren’t included in the result of a filter, regardless of the argument you specify. It’s a quirk–just one of those little details you need to know about the function. Whether you use 9 or 109, SUBTOTAL() will evaluate only the visible values–it will not evaluate hidden values.

10 ways to keep hard drives from failing

Hardware prices have dropped considerably over the last decade, but it’s irresponsible not to care for the hardware installed on machines. This is especially true for hard drives. Hard drives are precious commodities that hold the data employees use to do their jobs, so they should be given the best of care. Inevitably, those drives will die. But you can take steps to prevent a premature hard disk death. Let’s examine 10 such steps to care for the health of your drives.

1: Run chkdsk
Hard disks are eventually going to contain errors. These errors can come in the shape of physical problems, software issues, partition table issues, and more. The Windows chkdsk program will attempt to handle any problems, such as bad sectors, lost clusters, cross-linked files, and/or directory errors. These errors can quickly lead to an unbootable drive, which will lead to downtime for the end user. The best way I have found to take advantage of chkdsk is to have it run at the next boot with the command chkdsk X: /f, where X is the drive you want to check. This command will inform you that the disk is locked and ask whether you want to run chkdsk the next time the system restarts. Select Y to allow this action.

2: Add a monitor
Plenty of applications out there will monitor the health of your drives. These monitors offer a host of features that run the gamut. In my opinion, one of the best choices is the Acronis Drive Monitor, a free tool that will monitor everything from hard drive temperature to percentage of free space (and everything in between). ADM can be set up to send out email alerts if something is amiss on the drive being monitored. Getting these alerts is a simple way to remain proactive in the fight against drive failure.

3: Separate OS install from user data
With the Linux operating system, I almost always separate the user’s home directories (~/) from the OS installation onto different drives. Doing this ensures the drive the OS is installed upon will enjoy less reading/writing because so much of the I/O will happen on the user’s home drive. Doing this will easily extend the life of the drive the OS is installed on, as well as allow you to transfer the user data easily should an OS drive fail.

4: Be careful about the surrounding environment
Although this seems like it should go without saying, it often doesn’t. On a daily basis, I see PCs stuck in tiny cabinets with zero circulation. Obviously, those machines always run hot, thus shortening the lifespan of the internal components. Instead of shoving those machines into tight, unventilated spaces, give them plenty of breathing room. If you must cram a machine into a tight space, at least give it ventilation and even add a fan to pull out that stale, warm air generated by the PC. There’s a reason why so much time and money have gone into PC cooling and why we have things like liquid cooling and powerful cooling systems for data centers.

5: Watch out for static
Here’s another issue that should go without saying. Static electricity is the enemy of computer components. When you handle them, make sure you ground yourself first. This is especially true in the winter months or in areas of drier air. If you seem to get shocked every time you touch something, that’s a good sign that you must use extra caution when handling those drives. This also goes for where you set those drives down. I have actually witnessed users placing drives on stereo speakers, TVs, and other appliances/devices that can give off an electromagnetic wave. Granted, most of these appliances have magnets that are not strong enough to erase a drive. But it’s a chance no one should take.

6: Defragment that drive
A fragmented drive is a drive being pushed to work harder than it should. All hard drives should be used in their most efficient states to avoid excess wear and tear. This includes defragmenting. To be on the safe side, set your PC(s) to automatically defrag on a weekly basis. This works to extend the life of your drive by keeping the file structure more compact, so the read heads are not moving as much or as often.

7: Go with a solid state drive
Solid state drives are, for all intents and purposes, just large flash drives, so they have no moving parts. Without moving parts, the life of the drive (as a whole) is naturally going to be longer than it would if the drive included read heads, platters, and bearings. Although these drives will cost more up front, they will save you money in the long run by offering a longer lifespan. That means less likelihood of drive failure, which will cause downtime as data is recovered and transferred.

8: Take advantage of power save
On nearly every OS, you can configure your hard drive to spin down after a given time. In some older iterations of operating systems, drives would spin 24/7–which would drastically reduce the lifespan of a drive. By default, Windows 7 uses the Balanced Power Savings plan, which will turn off the hard drive after 20 minutes of inactivity. Even if you change that by a few minutes, you are adding life to your hard drive. Just make sure you don’t shrink that number to the point where your drive is going to sleep frequently throughout the day. If you are prone to take five- to 10-minute breaks often, consider lowering that time to no less than 15 minutes. When the drive goes to sleep, the drive is not spinning. When the drive is not spinning, entropy is not working on that drive as quickly.

9: Tighten those screws
Loose mounting screws (which secure the hard drive to the PC chassis) can cause excessive vibrations. Those vibrations can damage the platters of a standard hard disk. If you hear vibrations coming from within your PC, open it and make sure the screws securing the drive to the mounting platform are tight. If they aren’t, tighten them. Keeping your hardware nice and tight will help extend the life of that hardware.

10: Back up
Eventually, that drive will fail. No matter how careful you are, no matter how many steps you take to prevent failure, the drive will, in the end, die a painful death. If you have solid backups, at least the transition from one drive to another will be painless. And by using a backup solution such as Acronis Universal Restore, you can transfer a machine image from one piece of hardware to another piece of hardware with very little issue.

Jack Wallen was a key player in the introduction of Linux to the original TechRepublic. Beginning with Red Hat 4.2 and a mighty soap box, Jack had found his escape from Windows. It was around Red Hat 6.0 that Jack landed in the hallowed halls of TechRepublic.

Five tips for finding a cloud solution that’s ready for your users

Cloud computing is here to stay. It has quickly earned a reputation as a powerful business enabler, based on benefits such as scalability, availability, on-demand access, rapid deployment, and low cost. IT-savvy users in development and test functions have adopted the cloud model to accelerate application lifecycles. And with recent innovations in self-service access, users in consulting, training, and sales demo areas are also becoming the direct consumers of cloud services.

As these mainstream users adopt cloud services, many companies find “infrastructure-oriented” cloud services to be intimidating and difficult to use, since they were designed for IT pros. To be of value to functional users, a business cloud solution must be simple and self-service oriented, much like iTunes. This is especially important because many companies do not have sufficient IT resources to help set up, code, and customize cloud services.

A business cloud solution should be usable–not just codeable–from day one. Here are some steps you can take to determine whether a cloud solution is usable for your business.

1: Verify that the cloud directly addresses your business problem
What business problem are you trying to solve with the cloud? Having this type of focus can help you avoid the technology trap. If you’re evaluating a cloud solution for multiple functional users, including support, training, or business analysts, be sure that the cloud solution addresses their needs. A cloud that offers pure infrastructure will make it hard for functional users to accomplish business tasks without a UI framework to guide the workflow. If you are moving to the cloud to enable better collaboration across the team, ensure the cloud service provider offers a granular user access model that enables teams to assign rights to users based on their roles.

2: Focus on usability
Today’s enterprise business users need a simple self-service cloud solution that enables them to implement new ideas and collaborate with customers. Usability includes requirements such as configurability, self-service access, collaboration, visibility, and control. Ask yourself these questions:

  • Can the cloud be easily configured for different use-cases?
  • Does it deliver team management capabilities to enforce policies and role-based access?
  • Can your employees collaborate with prospects, customers, and partners and work on parallel streams without being constrained?
  • Does the cloud provide detailed usage reports and control mechanisms?

These are key requirements to enable business agility no matter the size of your organization or the technical maturity of your team. These capabilities will be applauded by your business users, as they don’t have time to build new IT skill sets and sit through hours of cloud training.

3: Determine whether the cloud runs existing applications without any rewrites
Most users are already familiar with the business and technical applications they use today, whether it’s email, training, or sales demo applications. Clouds that power these applications without any changes will deliver immediate value across your organization. Over the years, we have learned firsthand that business users won’t wait for IT to build or rewrite applications for use in the cloud. Time is money, and neither the business user nor the IT department has any to waste. As a result, the ability to run existing applications without any changes is a key factor in determining whether a cloud is easy to use.

4: Assess whether the cloud aligns cloud operating costs with business value
Cloud services do not require an upfront capital investment, but a usage-based pricing model can lead to sticker shock. To ensure that your cloud costs are in alignment with business value, see whether the cloud provider offers a service that measures the value you receive on a per-user basis. You can also ask whether the cloud provider offers distinct pricing for users at different levels. This can help you avoid paying the same fee for light and heavy users within your organization. Find out whether the cloud allows you to apply quotas to individuals and business units to cap usage at soft or hard budget limits. You will also want to ask whether you can automatically suspend resources when they are not in use to avoid the overuse of the cloud and resulting costs.

5: Pay special attention to responsive support
Successfully adopting new technologies, such as cloud computing, often requires a responsive support organization that can attend to your needs. Find out whether you can call a cloud provider directly or whether you must work through an online form or email inquiry to communicate about a cloud service. Also ensure that the support team will respond to your inquiries within a few hours versus a day or more.

The payoff
By following these steps to determine the cloud solution that’s right for your users, your organization will be well equipped to drive business agility, reduce costs, and accelerate your key business activities.

Sundar Raghavan is chief product and marketing officer at Skytap, a provider of cloud automation solutions. He is an industry veteran with an 18-year career in product and marketing roles at Google, Ariba, Arbor Software (now part of Oracle), and Microstrategy.

Enable a distribution list’s moderation features in Exchange 2010

Exchange 2010 includes a feature that has been needed since Exchange started supporting distribution groups: moderated distribution groups. With distribution groups, users who have the rights to send messages to the list can do so with unfettered access. If you’re able to send to the list, you can send anything as often as you like. This may create a situation in which users get too much mail that, in many cases, won’t pertain to them. It increases server load, and even worse, it’s an inefficient way to do business. I speak from experience.

We have three common distribution groups at Westminster College–one for students, one for faculty, and one for staff–with way too many people allowed to send way too much mail. One of the reasons our college is moving to Exchange 2010 is that we’re planning to make significant use of moderated distribution groups. We’ll couple the implementation with some other policy- and technology-based mechanisms to better target messages at the groups to which they pertain and get rid of our current scattershot approach to messaging.

The group creation process
The creation of a moderated distribution list starts out like any other list; in fact, you don’t actually create a moderated distribution list–you enable moderation features on an existing distribution group. You probably already know how to create a distribution group in Exchange, but if you don’t, here’s a quick run through: From the Exchange Management Console, go to Recipient Configuration | Distribution Group; from the Actions pane, click New Distribution Group and follow the wizard’s instructions.

Group moderation features
Once a group is created, you can enable moderation features by opening the group properties (right-click the group and choose Properties). On the Properties page, go to the Mail Flow Settings tab (Figure A). In this dialog box, select the Message Moderation option and then click the Properties button to open the Message Moderation window (Figure B).

Figure A

The Mail Flow Settings tab for the distribution group

Figure B

The Message Moderation window

In the Message Moderation window, select the checkbox next to Messages Sent To This Group Have To Be Approved By A Moderator; this enables the list’s moderation features. Next, choose which users will be designated as moderators for the group. If you don’t choose anyone, the group owner–that is, the person who created the group–will be responsible for message moderation.

Look at Figure C, and you’ll see that I attempted to send a message to the new list, and the moderator has been notified. It’s up to the moderator to decide whether to approve or reject the message. Clicking the Approve button simply allows the message to be sent. Clicking Reject brings you to a question: Simply Reject The Message, or Reject With Additional Comment. In Figure D, you see the message that is sent to the sender when a message is rejected.

Figure C

Approve or reject the message

Figure D

The message was rejected.

Important note
Before you make heavy use of moderated distribution lists, you should make sure that the message doesn’t have to pass through any non-Exchange 2010 Hub Transport servers. Older Hub Transport servers will simply pass messages on to group members and ignore moderation options.

Scott Lowe has spent 15 years in the IT world and is currently the Vice President and Chief Information Officer for Westminster College in Fulton, Missouri. He is also a regular contributor to TechRepublic and has authored one book, Home Networking: The Missing Manual (O’Reilly), and coauthored the Microsoft Exchange Server 2007 Administrator’s Companion (MS Press).

Smartphone enterprise security risks and best practices

If your organization allows users to connect their smartphones to the company network, you need to consider the following potential security risks and then develop policies for addressing those issues. I also list 10 security best practices for your company’s smartphone policies.

Potential smartphone security risks:
Lack of security software

Smartphones can be infected by malware delivered across the Internet connection, or from an infected PC when the phone is connected to the PC over USB to sync data. It’s even possible to infect the phone via a Bluetooth connection. It’s a good idea to require that those users who connect their smartphones to your network install security software on the devices.

Mobile security software is available for all of the major smartphone platforms. Some of the most popular mobile security suites include Kaspersky Mobile Security, Trend Micro Mobile Security, F-Secure Mobile Security, and Norton’s mobile security products.

Security bypass
Some phones make it easy to bypass security mechanisms for the convenience of the user. This makes it a lot easier and less frustrating for those who are trying to set up their phones to connect, but it also defeats the purpose of those security measures.

For example, I was able to easily set up an Android phone (Fascinate) with an Exchange Server account despite the fact that it notified me that there was a problem with the certificate. It simply asked me if I wanted to accept all SSL certificates and set it up anyway. I clicked Yes and was connected to my mail. On a Windows Phone 7 device, that same message gave me no option for bypassing the certificate problem. I had to import the certificate to the device and install it before I could access the mail. This was obviously more trouble, but also more secure.

Web security
Web browsers on smartphones have gotten a lot better and are actually usable. However, the web is a major source of malicious code, and with a small screen, it’s more difficult for users to detect that a site is a phishing site. The malware can then be transferred onto the network from the phone. To protect the network, you should use a corporate firewall that does deep packet inspection of the smartphone traffic.

The Wi-Fi threat
Most modern smartphones utilize the wireless carrier’s 3G or 4G network, as well as connect to Wi-Fi networks. If users connect their phones to an unsecured Wi-Fi network, they become vulnerable to attack. If company information (such as a network password) is stored on the phone, this creates a real security issue. If the user connects back to the corporate network over a public Wi-Fi network, it could put the entire company network at risk. Users should be required to connect to the company network via an SSL VPN, so that the data traveling between the phone and the company network will be encrypted in transit and can’t be read if it’s intercepted.

Data confidentiality
If users store business-related information on their smartphones, they should be required to encrypt the data in storage, both data that is stored on the phone’s internal storage and on flash memory cards. Interestingly, a recent article in Cellular News notes that a Goode Intelligence survey found that 64 percent of users don’t encrypt the confidential data stored on their smartphones. This is despite the fact that another survey by Juniper Networks found that more than 76 percent of users access sensitive information with their mobile devices.

In the past, this could be justified by the amount of processing power required to encrypt data and the slow processors on the phones. Today’s phones, however, boast much more powerful hardware; the Motorola Droid 2 Global, for example, has a 1.2 GHz processor.

You also need to consider cached data in smartphone applications that are always running. Some applications display updates on the screen that could contain confidential data, as well. This is another reason to password-protect the phone. Smartphones should be capable of being remotely wiped if lost or stolen.
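If the phones sync mail through Exchange ActiveSync, for example, a remote wipe can be issued from the Exchange 2010 Management Shell. This is only a sketch; the mailbox alias jsmith is hypothetical, and the wipe takes effect the next time the device connects:

# Find the device identity paired with the mailbox, then issue the wipe against that identity.
Get-ActiveSyncDeviceStatistics -Mailbox jsmith
Clear-ActiveSyncDevice -Identity <device identity reported by the previous command>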

Physical security
Because of their highly portable nature, smartphones are particularly prone to loss or theft, resulting in unauthorized persons gaining physical access to the devices. In addition, some people may share their phones with family members or loan them to friends from time to time. If those phones are set up with corporate email or VPN software configured to connect to the corporate network, for example, this is a security problem.

A basic measure is to require that users safeguard their devices by enabling PIN or password protection to get into the operating system when the phone is turned on or unlocked. Most smartphones include this feature, but most users don't enable it because it takes a little more time to enter the PIN/password each time. This will protect against access by a casual user who finds the phone or picks it up when the owner leaves it unattended. However, those features can often be defeated by a knowledgeable person.

Android 2.0.1 had a bug that made it easy to get to the homescreen without entering the PIN by simply hitting the Back button when a call came in on the locked Droid. The iPhone had a similar issue in versions 2.0.1 and 2.0.2, which let you get around the security by hitting Emergency Call and double clicking the Home button.

In the future, PINs and passwords may be replaced by biometric or facial recognition systems.

Security best practices for smartphone policies
Smartphone security in the business environment requires a two-pronged approach: protect the phones from being compromised and protect the company network from being compromised by the compromised phones. Here are some security best practices that you can incorporate into your smartphone policies.

  1. Require users to enable PIN/password protection on their phones.
  2. Require users to use the strongest PINs/passwords their phones support.
  3. Require users to encrypt data stored on their phones.
  4. Require users to install mobile security software on their phones to protect against viruses and malware.
  5. Educate users to turn off the applications that aren’t needed. This will not only reduce the attack surface, it will also increase battery life.
  6. Have users turn off Bluetooth, Wi-Fi, and GPS when not specifically in use.
  7. Have users connect to the corporate network through an SSL VPN.
  8. Consider deploying smartphone security, monitoring, and management software such as that offered by Juniper Networks for Windows Mobile, Symbian, iPhone, Android, and BlackBerry.
  9. Where the devices support it, configure smartphones to use your rights management system to prevent unauthorized persons from viewing data or to prevent authorized users from copying or forwarding it.
  10. Carefully weigh the risks and benefits when deciding whether to allow employee-owned smartphones to connect to the corporate network.

Debra Littlejohn Shinder, MCSE, MVP is a technology consultant, trainer, and writer who has authored a number of books on computer operating systems, networking, and security. Deb is a tech editor, developmental editor, and contributor to over 20 additional books on subjects such as the Windows 2000 and Windows 2003 MCSE exams, CompTIA Security+ exam, and TruSecure’s ICSA certification. She has authored training and marketing material, corporate whitepapers, training courseware, and product documentation for Microsoft Corp. and other technology companies. Deb currently specializes in security issues and Microsoft products.

Microsoft Word


6 ways to get free information about Word

Hiring someone to train your troops to use Word is a great idea, but there won't always be a trainer nearby. Fortunately, there are a number of ways users can get help for free. It might take a bit of research, but with a little perseverance you can usually find the help you need.

1. [F1]
The first line of defense is [F1]. Press [F1] and enter a few descriptive words, such as "change style" or "delete header". Word will display a list of help topics based on your input. Sometimes this works great; sometimes the results are inconsistent. However, it's the best place to start, because sometimes the answer pops right up!

[F1] is available in all Office applications. Help must be installed for these topics to be available.

2. Microsoft Answers
Microsoft Answers is a free support site (forum). If you want to search available posts, enter a question in the Find Answer control. If you don’t find what you need, click Ask A Question (at the bottom of the page). You have to sign in using your Windows Live ID. If you don’t have one, there’s a link for that too.

Microsoft Answers supports Office, not just Word.

3. Word MVPs
MVPs are volunteers who share their expertise worldwide and for free; Microsoft honors those who stand out with the MVP title. MVPs really know their stuff, and there are two ways to benefit from their expertise and generosity. First, visit The Word MVP Site. There's a lot of information readily available. If you don't find an answer, click Contact, read the instructions, and submit your question. There's no guarantee anyone will respond, but it can't hurt. However, try to find the answer yourself first; you're probably not going to get a response to a question that's answered by an existing Help file. And by all means, be polite. These folks provide this service for free.

In addition, MVP Web Sites lists current MVPs with links to their sites. You can't submit questions there, but you will find valuable information.

4. Microsoft Knowledge Base
A long-time favorite support site is Microsoft's Knowledge Base. This is a huge database of articles that offers how-to instructions, workarounds for bugs, and so on. The articles are a bit dry and sometimes difficult to follow, but you'll usually find something you can use. There's even an article on how to use the Knowledge Base!

5. Microsoft Word Help and How-to
Word Help and How-to is another site supported by Microsoft. Use keywords to search the available files. You won’t get personalized answers, but you might find just what you need.

6. Listservs
My favorite resource is a listserv; I’m a member of many. If you’re not familiar with the term, a listserv is an email server (group). You send messages and other members respond, all via email. Yahoo! Groups is a good place to start, but there are private listservs as well. Search on “Microsoft Word” and see what’s available.

It might take a while to find just the right group. In addition, they’re a bit like potato chips. Joining one inevitably leads to joining more–you’ve been warned!

Microsoft Excel


Keep users from selecting locked cells in Excel

Most of us create custom workbooks that others update. You probably protect the sheets and unlock only the input cells. That way, users can't accidentally delete or change formulas and other critical values. The worst they can do is enter invalid values.

Unlocking input cells and protecting sheets is a simple enough process, but a truly knowledgeable user can get around it. For those users, there’s a simple macro for resetting things. First, let’s unlock input cells in the simple sheet shown below.

In this particular sheet, users only need to update two cells: B1 and B2. You’ll want to unlock your input cells, as follows, before you protect the sheet:

  1. Select the input cells. In this case, that’s B1:B2.
  2. Right-click the selection and choose Format Cells from the resulting context menu.
  3. Click the Protection tab.
  4. Uncheck the Locked option.
  5. Click OK.

The next step is to protect the sheet as follows:

  1. From the Tools menu, choose Protection, and then select Protect Sheet. In Excel 2007 and 2010, click the Review tab | Protect Sheet (in the Changes group).
  2. Enter a password.
  3. Uncheck the Select Unlocked Cells option.
  4. Click OK.
  5. Enter the password a second time to confirm it.
  6. Click OK.

At this point, you can select and change the contents of cells B1 and B2; you can't select any other cells at all.

As I mentioned, it won’t always matter if a user can select locked cells. On the other hand, the setup I’m suggesting creates an easy-to-follow data entry map. There’s no confusion for the user–the only updateable cells are those the user can select.

This much you might already know. What’s a bit scary is that a user can quickly undo the selection property as follows:

  1. From the View menu, choose Toolbars.
  2. Select Control Toolbox.
  3. Click the Properties tool.
  4. In the properties window, change the EnableSelection property to 0-xlNoRestriction.
  5. Click OK.

Users can also access this property via the VBE. In Excel 2007 and 2010, the user can display the Developer tab (enable it in the Options dialog) and click Properties in the Controls group.

After resetting the EnableSelection property to 0, users can select any cell in the sheet, but they still can’t alter cell contents, except for the cells you unlocked before protecting the sheet. This doesn’t seem all that important, unless your users don’t know what they’re supposed to do. In this simple sheet, the input cells are clear, but a complex sheet with non-contiguous input ranges will certainly be more confusing.

To reclaim the original settings, include two macros: One that resets the property when the workbook is opened and a second that resets the property when the selection in the sheet changes. Open the Visual Basic Editor and double-click ThisWorkbook in the Project Window. Then, enter the following macro:

Private Sub Workbook_Open()
  'Restrict selection to unlocked cells in the IndirectEx sheet.
  Worksheets("IndirectEx").EnableSelection = xlUnlockedCells
End Sub

That macro will reset the property when the workbook is opened. That way, users always start with the right setting. To add the macro that acts on a selection change in the actual sheet, double-click the sheet (by name) in the VBE Project window and enter this macro:

Private Sub Worksheet_SelectionChange(ByVal Target As Range)
  'Reset the property if a user manages to clear the selection restriction.
  Worksheets("IndirectEx").EnableSelection = xlUnlockedCells
End Sub

The only difference is the event that executes each macro. The SelectionChange event fires when a user changes the cell selection (only in the specified sheet, not throughout the entire workbook). Users won't notice it at all unless they manage to disable the EnableSelection property (as described earlier). In that case, the user will be able to select a single locked cell; doing so executes the macro, which resets the property.

The truth is, a user who's smart enough to get around your locked cells might also know how to circumvent your macros, but they're worth a try.

Microsoft PowerPoint


Repeat a custom image across a slide's background

PowerPoint provides a number of pre-defined backgrounds but you might want to use an image of your own. Fortunately, PowerPoint is accommodating; it’s easy to repeat a custom image across a slide’s background. For instance, the following image is a .png file created in Paint. PowerPoint will have no problem working with it. This file is relatively small at 182 by 282 pixels and 2881 bytes. Work with the smallest files possible.

Once you have an image file, you’re ready to insert it, as follows:

  1. Right-click a slide’s background and choose Format Background.
  2. Click the Picture Or Texture Fill option.
  3. Click the File button (under Insert From).
  4. Use the Insert Picture dialog to locate the file. Select the file and click Insert.
  5. Click the Tile Picture As Texture option.
  6. Click Close to apply to the current slide. Click Apply to All and then click Close to apply to all the slides in the presentation.

It takes a few more clicks in PowerPoint 2003:

  1. Right-click a slide’s background and choose Background.
  2. Click the dropdown under Background Fill and choose Fill Effects.
  3. Click the Texture tab and then click the Other Texture button.
  4. Use the Select Texture dialog to locate the file. Select the file and click Insert.
  5. Click OK and then click either Apply or Apply To All.

To save the background as a separate file, right-click the background and choose Save Background.

The image in this example is too busy to actually use as a background; its busyness simply shows how easy it is to work with an abstract pattern. Insert the file as a texture and PowerPoint does all the rest. It couldn't be simpler!

Update VMware Tools from PowerCLI

For vSphere installations, the VMware Tools drivers allow virtual machines to connect to the ESX or ESXi hypervisor for optimal performance, as well as take advantage of all of the current virtual devices. Each incremental update of VMware ESXi or ESX may require a corresponding update to the VMware Tools installation in the guest virtual machines. Keeping VMware Tools up to date can be a task that gets away from you quickly.

One fast and repeatable way to update VMware Tools is to use PowerCLI, VMware’s PowerShell implementation. A number of commands (Cmdlets) are available to make quick work of this task; for instance, the Update-Tools Cmdlet in PowerCLI allows a guest to receive an update to VMware Tools.

To illustrate this Cmdlet, we'll use the DROBO-WS2K8R2-SQL2K8 virtual machine, which has an out-of-date VMware Tools installation (Figure A).

Figure A

Click the image to enlarge.

The following PowerCLI string will update VMware Tools on the virtual machine in question:

Update-Tools -NoReboot -VM DROBO-WS2K8R2-SQL2K8 -Server VC4.RWVDEV.INTRA

In this example, the VM is specified, as well as the vCenter Server (VC4.RWVDEV.INTRA). When the command is processed, it is displayed in the vSphere Client (Figure B).

Figure B

Click the image to enlarge.

Note that the -NoReboot option was specified in this run of the Cmdlet; the option is new to the PowerCLI implementation that came with vSphere 4.1. While it will not reboot the virtual machine, there will still be an impact on the Windows guest operating system. A VMware Tools upgrade will in most situations update the driver for the network interface within the virtual machine; this causes a momentary loss of network connectivity for the guest that is self-recovering yet noticeable. Keep this in mind when using the script.

If you need to update multiple virtual machines, you have several options. The easiest is to use a wildcard in the -VM value; you could also run a separate line for each virtual machine to target them explicitly.
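Here is a rough PowerCLI sketch of both approaches, borrowing the DROBO- naming and vCenter Server from the example above; adjust them to match your environment:

# Wildcard: update VMware Tools on every VM whose name starts with DROBO-.
Get-VM -Name DROBO-* -Server VC4.RWVDEV.INTRA | ForEach-Object { Update-Tools -VM $_ -NoReboot }
# Explicit: one line per virtual machine.
Update-Tools -NoReboot -VM DROBO-WS2K8R2-SQL2K8 -Server VC4.RWVDEV.INTRA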

Rick Vanover (MCITP, MCSA, VCP, vExpert) is an IT Infrastructure Manager for a financial services organization in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

Secret to succeeding with social media apps in your organization

There isn’t a serious IT leader on the planet who isn’t interested in figuring out how to capture the power and benefits of social media applications for their enterprise. Now before you get the wrong idea, let me be clear: I’m not talking about mining social media sites like Facebook or trying to open your enterprise to social media applications.

What I am referring to is capturing the essence of Facebook and other social media hubs, i.e., creating a collaborative, user-driven environment that connects people to a common purpose. In this context, we're connecting people who work for the same company in order to work better, faster, and easier. And in so doing, we're streamlining and promoting communication, information distribution, collaboration, and community building in much the same way that Facebook does: by moving people onto a central platform for messaging and information sharing.

Now, it’s not that there aren’t any applications for doing this. There are plenty of them. Most of them fall under the heading of collaboration platforms and provide tools for building communities, authoring and sharing content, managing projects, and collaborating in truly visionary ways. The problem is that full-scale adoption of this collaborative approach has hardly caught on.

For many, especially the over-35 crowd, using these systems falls on par with the joy of filling out a timesheet–just another cumbersome task that has to be done; another process getting in the way of real work. And the result, no surprise, is that most knowledge workers (as we are now known) take any opportunity to work around these systems and avoid these applications altogether. In the absence of a powerful mandate, these applications languish on the sidelines or receive marginal use at best.

Personally, I’m a huge supporter and user of this new generation of collaborative software. I have been very close to a number of implementations (including our own in-house transformation), and I have experienced firsthand just how powerful they can be. More importantly, I believe that I have discovered the secret to success with this type of change. Are you ready? It’s gonna shock you at first, so stay with me.

The secret to success
OK, here it is: Disable e-mail attachments. That's right, stop allowing people in your company to send attachments along with their e-mails to anyone inside your company. (You'll have to leave attachments enabled for communicating with outsiders, of course.) If you have the influence (or guts) to pull it off, I promise it will drive adoption of your collaboration application so quickly you won't believe it. Here's why:

At the heart of all true collaboration applications is the basic understanding that we work together on ideas and these ideas are born, take shape, and live in documents. From the earliest stages of idea generation (whether as Word, PowerPoint, Excel, Flowchart, MindMap, whatever), collaboration apps encourage users to get material off their local hard drives and into a platform where they belong to all. In short, collaboration applications represent the critical path to true group thinking and working.

But, and this is a big but, in order for these applications to work, they have to be used regularly and properly. Documents have to reside on the platform. And that’s exactly where the problem lies. Most people are not accustomed to working this way. They can’t be bothered to get content onto a collaboration platform. They believe they have a quick and easy way for collaborating without the overhead–it’s called e-mail. And human nature ain’t on your side when it comes to beating this one.

I could go on and on about all the wonderful benefits available to users and companies that embrace collaboration platforms–commenting, notification, version control, search, and so much more–but the prevalent truth is that wide-scale adoption is still the exception, not the rule. (God knows the vendors are working it day in and day out.) As in many other cases, adoption of collaboration platforms lags because tomorrow's potential benefits don't seem to offer enough to pull users away from today's quick-and-dirty process.

Case study–a law firm takes the plunge
I have seen extremely smart lawyers suffer document-version screwups multiple times at a cost of hundreds of hours of rework (that means tens of thousands of dollars unbilled) and still avoid using the firm’s collaboration platform.

All that changed for one firm when a senior partner, fed up with the situation and associated costs, politely refused to read anything e-mailed to him as an attachment. To boot, he didn't e-mail attachments either. If his colleagues wanted to collaborate and work with him–and since he was the senior partner, they certainly did–there was no other choice but to use the collaboration platform. His position: If the document was worth his time, it was worth a two-minute investment for the "sender" to work through the platform.

Sure enough, within 60 days the firm was transformed. Everything, and I mean everything, moved onto the collaboration platform. And then the magic started to happen. Document comments started flying around, stringing one-off thoughts into actual discussions. Version-control worries became a thing of the past. New ideas began popping up in the company wiki, and a simple, but effective, task management process came to life on its own. Here's the best part: No one, and I mean no one, ever sent another "Could you send me that file?" or "Is this the latest version?" e-mail. All this happened because the firm's central building blocks, its intellectual property, were on the platform and not being passed around via e-mail.

Today if you ask anyone at the firm about the platform, they would say that they couldn’t work without it and that going back to e-mail-centric collaboration would be a painful setback to their productivity. Success! And the best part of it all: Internal e-mail went back to being used for what it was originally intended–brief, quick, one-to-one messages. Anything more substantial goes on to the collaboration platform from the start.

The takeaway
I know it sounds a bit extreme and you may not be able to pull it off completely in your organization. Nonetheless, you may be able to apply the lesson in a more limited way. Perhaps take baby steps–a day or a week without attachments–as a pilot. One thing is for certain: if you’re successful in getting people over to the other side, once they cross over, it doesn’t take long at all for them to stop wishing there was a way back.
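The author doesn't prescribe a mechanism, but if your mail runs on Exchange 2010, one rough way to pilot an internal no-attachments rule is a transport rule. Treat this as a sketch only: the rule name, size threshold, and rejection text are illustrative, and you would want to scope and test it carefully before turning it loose on real users.

# Reject internal-to-internal messages carrying an attachment larger than 1 KB.
New-TransportRule -Name "Pilot: no internal attachments" -FromScope InOrganization -SentToScope InOrganization -AttachmentSizeOver 1KB -RejectMessageReasonText "Please post the document on the collaboration platform and send a link instead."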

Marc J. Schiller is a leading IT thinker, speaker, and author of the upcoming book The Eleven Secrets of Highly Influential IT Leaders. Over the last 20 years he has helped IT leaders and their teams dramatically increase their influence in their organization and reap the associated personal and professional rewards.

Use Sysinternals Active Directory Explorer to make a domain snapshot

Active Directory is one of Microsoft’s best products ever in my opinion. It allows for an incredible amount of control of computer and user accounts, and there is so much more under the hood.

The free Sysinternals Active Directory Explorer tool allows administrators to quickly look at information for the entire domain, as well as take a snapshot for comparison at a later date. The tool should not replace any of the Active Directory tools for everyday use, but rather supplement them for snapshots or a view into specific configuration.

Once Active Directory Explorer is running, a basic authentication screen appears so you can connect to a domain or to a saved snapshot database (Figure A).

Figure A

Click the image to enlarge.

It’s not ideal, but you can create objects, such as a user account, within the Active Directory Explorer tool (Figure B).

Figure B

Click the image to enlarge.

Creating a snapshot of the Active Directory domain (Figure C) will export the entire directory as a .DAT file on local disk.

Figure C

Click the image to enlarge.

You can then apply the snapshot as a comparison against the live configuration of the domain; this is a great way to see what has changed. It's also a much more comfortable way to investigate changes than performing a wholesale restore of the domain, or even of selected objects, which can be very impactful to the state of user and computer accounts. Figure D shows a comparison of the snapshot to a live domain being prepared.

Figure D

Click the image to enlarge.

Rick Vanover (MCITP, MCSA, VCP, vExpert) is an IT Infrastructure Manager for a financial services organization in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

5+ tips to ensure PCI DSS compliance

On occasion, I help a friend who owns several businesses. His latest venture is required to comply with the Payment Card Industry Data Security Standard (PCI DSS). My friend is computer savvy, so between the two of us, I assumed the network was up to snuff. Then we went through a compliance audit.

The audit was eye opening. We embarked on a crash course in PCI DSS compliance with the help of a consultant. My friend thought the consultant could help prepare for the mandatory adoption of PCI DSS 2.0 by January 1, 2011.

The PCI Security Standards Council defines PCI DSS this way: “The goal of the PCI Data Security Standard is to protect cardholder data that is processed, stored, or transmitted by merchants. The security controls and processes required by PCI DSS are vital for protecting cardholder account data, including the PAN–the primary account number printed on the front of a payment card.”

The consultant’s first step was to get familiar with the network. He eventually proclaimed it to be in decent shape, security-wise. Yet the look on his face told us there was more. Sure enough, he went on to explain that more attention must be paid to protecting cardholder data.

Back to school
The consultant pointed out that PCI DSS consists of 12 requirements. These requirements are organized into six guides. Although the requirements are for PCI DSS compliance, I dare say the guides are a good primer for any business network, regardless of whether PCI DSS is a factor. With that in mind, I’ve used the guides as the basis for these tips.

1: Build and maintain a secure network
Guide 1 states the obvious, and books have been written on how to secure a network. Thankfully, our consultant gave us some focus by mentioning that PCI DSS places a great deal of emphasis on the following:

  • Well-maintained firewalls are required, specifically to protect cardholder data.
  • Any and all default security settings must be changed, especially usernames and passwords.

Our consultant then asked whether my friend had offsite workers who connected to the business’s network. I immediately knew where he was going. PCI DSS applies to them as well–something we had not considered but needed to.

2: Protect cardholder data
Cardholder data refers to any information that is available on the payment card. PCI DSS recommends that no data be stored unless absolutely necessary. The slide in Figure A (courtesy of PCI Security Standards Council) provides guidelines for cardholder-data retention.


Figure A

One thing the consultant stressed: After a business transaction has been completed, any data gleaned from the magnetic stripe must be deleted.

PCI DSS also stresses that cardholder data sent over open or public networks needs to be encrypted. The minimum required encryption is SSL/TLS or IPSEC. Something else to remember: WEP has been disallowed since July 2010. I mention this as some hardware, like legacy PoS scanners, can use only WEP. If that is your situation, move the scanners to a network segment that is not carrying sensitive traffic.

3: Maintain a vulnerability management program
It’s not obvious, but this PCI DSS guide subtly suggests that all computers have antivirus software and a traceable update procedure. The consultant advised making sure the antivirus application has audit logging and that it is turned on.

PCI DSS mandates that all system components and software have the latest vendor patches installed within 30 days of their release. It also requires the company to have a service or software application that will alert the appropriate people when new security vulnerabilities are found.

4: Implement strong access control measures
PCI DSS breaks access control into three distinct criteria: digital access, physical access, and identification of each user:

  • Digital access: Only employees whose work requires it are allowed access to systems containing cardholder data.
  • Physical access: Procedures should be developed to prevent any possibility of unauthorized people obtaining cardholder data.
  • Unique ID: All users will be required to have an identifiable user name. Strong password practices should be used, preferably backed by two-factor authentication.

5: Regularly monitor and test networks
The guide requires logging all events related to cardholder data. This is where unique ID comes into play. The log entry should consist of the following:

  • User ID
  • Type of event, date, and time
  • Computer and identity of the accessed data

The consultant passed along some advice about the second requirement. When it comes to checking the network for vulnerabilities, perform pen tests and scan the network for rogue devices, such as unauthorized Wi-Fi equipment. It is well worth the money to have an independent source do the work. Doing so removes any bias from company personnel.

6: Maintain an information security policy
The auditor stressed that this guide is essential. With a policy in place, all employees know what’s expected of them when it comes to protecting cardholder data. The consultant agreed with the auditor and added the following specifics:

  • Create an incident response plan, since figuring out what to do after the fact is wrong in so many ways.
  • If cardholder data is shared with contractors and other businesses, require third parties to agree to the information security policy.
  • Make sure the policy reflects how to take care of end-of-life equipment, specifically hard drives.

Final thoughts
There is a wealth of information on the PCI Security Standards Council’s Web site. But if you are new to PCI DSS, or the least bit concerned about upgrading to 2.0, I would recommend working with a consultant.

5 tips to help prevent networking configuration oversights

I don’t know about you, but I find myself forgetting the same things over and over, a case of deja vu and amnesia at the same time: “I think I forgot this before!” When it comes to networking configuration, small errors happen most frequently. Here are some of the networking configuration errors I often encounter, along with what I’m doing to reduce the chances of their happening again.

1: Subnets other than 24-bit
How many subnets do you have that use something other than a 24-bit netmask (255.255.255.0)? I don't work with many subnets other than the standard class C network, but every time I do, I have to double-check myself to make sure the correct subnet mask is applied. I keep looking for reasons to use subnets other than the venerable 24-bit mask, but in most internal address spaces built on non-routable IP addresses there's rarely a compelling one.

2: DNS suffix lists
Having a complicated list of DNS suffixes and missing one or more of the entries can make name resolution less than pleasant. The good news is that we can fix this with Windows Group Policy, which can set a primary DNS suffix and a suffix search order for each computer account.
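For a one-off machine that Group Policy doesn't reach, the same search list lives in the TCP/IP parameters of the registry. A minimal PowerShell sketch, with example suffixes you would replace with your own:

# Comma-separated DNS suffix search list for this machine only.
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters' -Name SearchList -Value 'corp.example.com,example.com'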

3: Default gateway other than .1
Each time a static IP address is configured on a network whose default gateway is something other than .1, I get a little confused and have to double-check the configuration. For subnets smaller than a full class C (254 usable hosts), the chances are higher that the subnet's address range won't include a .1 address to use as the default gateway. The fix can be to standardize on class C-sized subnets for internal networks, even if there are wasted IP addresses at the end of the range.

4: DNS IP addresses
If I had my way, every DNS server at every site would have the same IP address structure as every other site. That way, I would have to determine only the first two or three positions of the IP address and the DNS servers would be easy to figure out. I’m game for anything I can do to standardize. For example, if every network has a .1 default gateway, .2 can be the DNS server for that network. That, I can remember.

5: WINS in all its glory
I can ping the server by its fully qualified domain name, but I can't reach it by just the NetBIOS name. A number of things can be wrong, including the WINS configuration. A properly configured set of DNS suffixes and search orders can often address this. But one way to avoid the issue altogether is to implement the GlobalNames zone in Windows Server 2008's DNS engine.
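Standing up the GlobalNames zone on a Windows Server 2008 DNS server is a short exercise. The following is a sketch of the Microsoft deployment guidance as I recall it (DC1 is a placeholder for your DNS server), so verify the syntax against your build; single-label names are then added to the zone as CNAME records pointing at the hosts' fully qualified names:

dnscmd DC1 /config /enableglobalnamessupport 1
dnscmd DC1 /zoneadd GlobalNames /dsprimary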

Easy printer sharing in GNOME

Do you remember how challenging sharing printers could be back when you had to manually configure your smb.conf file to include shared printers? Well, those days are over with the latest incarnations of the GNOME desktop. Like folder sharing, printer sharing has been made very simple and can be done completely within a GUI. Let’s see just how this is done.

Assumptions
I will assume that you already have the printer attached to the local machine and it is printing just fine. I will also assume the machine the printer is attached to is the Linux machine that will share the printer out. If that is all the case, you are ready to begin the sharing process.

How to share out a printer
The first thing to do is to click System | Administration | Printing. When this new window opens, right-click the printer you want to share and select Properties. From the Properties window click the Policies tab (see Figure A) and then make sure the following are checked:

  • Enabled
  • Accepting Jobs
  • Shared

Figure A

Once you have those items checked, click OK.

The next step is to configure the CUPS server settings. To do this go back to the main Printing window and click Server | Settings. In this new window (see Figure B) make sure the following items are checked:

  • Publish shared printers connected to this system.
  • Allow printing from the network.

The rest of the settings are optional.

Figure B

Once you click OK, your printer should be ready for use by remote machines. Of course, how you connect to this shared printer will depend on the operating system you are connecting from.

Issues
Obviously there may be issues, depending upon the OS you are using. For example, if you are connecting from a Windows 7 operating system, you may need to make a single change to your smb.conf file (yes, there will be a manual edit in this case). The edit in question is this:

  1. Search for the [printers] section.
  2. Change the line browseable = no to browseable = yes (a sample of the resulting section appears after these steps).
  3. Restart Samba.
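After step 2, the [printers] section should look something like the sketch below. The surrounding lines are typical distribution defaults and may differ on your system; the line that matters here is browseable = yes:

[printers]
   comment = All Printers
   path = /var/spool/samba
   printable = yes
   browseable = yes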

That’s it. Once you make that change you should be able to then see your Printers from Windows machines.

Final thoughts
Sharing out printers used to be a challenge for Linux users. Thanks to modern desktops like GNOME (and a much easier to administer Samba), printer sharing has become far easier than it once was.

Jack Wallen was a key player in the introduction of Linux to the original Techrepublic. Beginning with Red Hat 4.2 and a mighty soap box, Jack had found his escape from Windows. It was around Red Hat 6.0 that Jack landed in the hallowed halls of Techrepublic.


Optimize data access for mobile phone apps

I’ve been experimenting with Windows Phone 7 development, and I have not been 100% happy with the process. (For details, read My first Windows Phone 7 app development project and The Windows Phone 7 App Hub experience.) However, an interesting aspect of my experiment is that the limitations of mobile devices (and Windows Phone 7 specifically) are forcing me to dust off some old-school performance techniques.

The major limitation I encountered with Windows Phone 7 is that it does not support WCF Data Services natively. There is a library to remedy this problem, but unfortunately, it does not support authentication, and many data services will need authentication. You can manually make the request and parse the XML yourself if you really want to, but it is clumsy.

The other issue is that publishing data via Web services has ongoing costs for you, the publisher, directly linked to usage rates, but the App Hub publishing model does not allow for subscription fees at this time. If your application is popular, the last thing you need is to be selling an app for 99 cents that costs you 20 cents per user per month to operate.

Another concern with using Web services is that the Windows Phone 7 application guidelines are very strict about delays related to networking; you cannot make these requests synchronously, and you must have provisions for cancellation for “long running” operations. In my experience, an application was rejected because it called a service that never took more than a second or two to return with results, and I needed to provide a Cancel button for that scenario.

Because trying to access Web services is so clumsy right now, and you need to be mindful of the need to support cancellation, an attractive alternative is to put the data locally and work with it there.

Unfortunately, Windows Phone 7 also lacks a local database system. At best, your local data options are XML files and text files.

If you do this kind of work on a desktop PC, it's not a big deal; by default, people will just throw the data into an XML file and have a great day. The problem is that XML is a very inefficient file format, particularly for parsing and loading, and mobile devices lack CPU power. Depending on how much data you have, your application could be very unresponsive while loading it, which will get your application rejected or force you to support cancellation for loading the data. And honestly, what is the point of a data-driven app where users cancel the data load?

So I’ve been digging into my bag of tricks (I’m glad I remember Perl), and here are some ways you can load data with a lot more speed than parsing XML.

  • CSV: CSV is a tried and true data format, and it loads very fast. There are a zillion samples on the Internet of how to work with CSV data.
  • Fixed width records: If you need even more speed than CSV offers, and you are willing to give up some storage space in the process, fixed width records are even faster than CSV. You can find lots of examples online of how to implement a data system using fixed width records.
  • Indexing: You can create a simple indexing system to help locate your records in a jiffy. If your application only needs to read data, this is downright easy. Indexing provides awesome speed boosts with fixed width records since you can read the index location, multiply the row number by the record size to get the byte offset, and move directly there. It can provide an advantage for delimited files too, but usually only if you need to parse the records to find the data without the index. Load the index into RAM for additional benefits.
  • Data file partitioning: Sometimes data can logically be split amongst smaller files, which can help your performance with delimited data files. For example, if you have data that can be grouped by country, put each country’s data into a separate file; this way you reduce the number of reads needed to find data, even if you know what line it is on. Fixed width records with an index usually will not benefit from data partitioning, since they can directly access the data.

Disclosure of Justin’s industry affiliations: Justin James has a contract with Spiceworks to write product buying guides; he has a contract with OpenAmplify, which is owned by Hapax, to write a series of blogs, tutorials, and articles; and he has a contract with OutSystems to write articles, sample code, etc.

Justin James is an employee of Levit & James, Inc. in a multidisciplinary role that combines programming, network management, and systems administration. He has been blogging at TechRepublic since 2005.

Microsoft Excel


How to use Microsoft Excel’s RANK() function

Excel’s RANK() function returns the rank of a value within the context of a list of values. By rank, I mean a value’s relative position to the other values in the list. You could just sort the list, but that’s not always practical and doing so won’t return a rank, although you can easily see which values rank highest and lowest in a sorted list.

The figure below shows the RANK() function at work in a simple spreadsheet. The function in cells F2:F5 returns the rank of the four values in E2:E5. Those values are the result of the following SUMIF() function:

=SUMIF($A$2:$A$9,$D2,$B$2:$B$9)

The SUMIF() returns a total for each individual listed in column A. (You can recreate this spreadsheet or work with a simple column of values.)

About RANK()
The RANK() function has three arguments:

RANK(number,reference,[order])

where number is the value you’re ranking, reference identifies the list of values you’re comparing number against, and order specifies an ascending or descending rank. If you omit order, Excel assumes the value 0, which ranks values in descending order. Any value other than 0 ranks in ascending order. In this example, I enter the following function into cell F2:

=RANK(E2,$E$2:$E$5)

Notice that number is relative but reference is absolute. You’ll want to maintain that structure when applying this to your own spreadsheet. Copy the function in F2 to F3:F5. The largest value, 120, returns a rank of 1. The lowest value, 98, is 4. To reverse the ranking order, include order as follows:

=RANK(E2,$E$2:$E$5,1)

Understanding a tie
Something you’ll want to watch for is a tie. RANK() will return the same rank for a value that occurs more than once. Interestingly, RANK() accommodates the tie by skipping a rank value. For instance, the following spreadsheet shows what happens when both Alexis and Kate have the same value (101). The rank for both is 2 and there’s no rank of 3. The lowest value still ranks as 4.

There’s no argument to change this behavior. If a tie isn’t valid, you must find a second set of criteria to include in the comparison.
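One common workaround, which goes beyond what this article covers, is to add a COUNTIF() term that bumps each duplicate down by the number of times its value has already appeared. Adapt the ranges to your own sheet; in the example above you would enter this in F2 and copy it down:

=RANK(E2,$E$2:$E$5)+COUNTIF($E$2:E2,E2)-1

With this version, the first 101 still ranks 2, the second 101 ranks 3, and no rank value is skipped.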

Microsoft Outlook


Display multiple monthly calendars in the Date Navigator

By default, Outlook displays just 1 month in the Date Navigator. By stealing a bit of space from other areas, you can display more. If you want to keep the Date Navigator in the Navigation Pane, just drag the right and bottom borders to allow more room. Make My Calendars and Mail (also in the Navigation Pane) as small as possible to free up the most room.

If the Date Navigator is in the To-Do Bar (new to Outlook 2007), do the following:

  1. Right-click the bar and select Options.
  2. Change the Number Of Month Rows setting from 1 (the default) to 3 or 4 (you can go up to 9).
  3. Click OK, and Outlook will display that many rows of calendars in the Date Navigator.

To see even more calendars in the Date Navigator, drag the border between the To-Do Bar (now mostly Date Navigator) and the calendar view to the left. Doing so will fill the Date Navigator with more monthly calendars, automatically. Changing the row option isn’t always enough–you might have to change your screen resolution to see them all.

Microsoft Word


Tips for wrapping text around a Word table

Most of us tend to layer a table between paragraphs of text–I know I usually do. The figure below shows the typical placement of a simple table in a document. The table follows a paragraph of explanatory or introductory text.

You might not realize that you can position a table in a paragraph and wrap text around the table. This next figure shows the result of dragging the table into the paragraph. By default, the table’s Text Wrapping property is None and the table aligns to the left margin of the page. When I dropped it into the paragraph, Word changed the property so Word could wrap the text around the table. Word does the best it can, but the results aren’t always a perfect fit. Fortunately, you’re not stuck.

The first thing you can do is move the table around a bit more–especially if the placement doesn’t have to be exact. By moving the table around just a little, you’ll probably hit upon a better balance. (Most likely, I wouldn’t break up the middle of a paragraph with a table, but for the sake of the example, please play along.)

Word does a good job of defining properties when you drag the table to position it. However, if a little drag action doesn’t produce a mix you can live with, you can force settings that are more exact. To access these properties, right-click the table, choose Table Properties, and click the Table tab (if necessary). First, make sure the Text Wrapping property is set to Around. If you want the table flush to the left or right, change the Alignment to Left or Right. The example table is centered.

Click the Positioning button. In the resulting Table Positioning dialog box, you can set the following properties:

  • The horizontal position of the table, relative to a column, margin, or page.
  • The vertical position of the table, relative to a paragraph, margin, or page.
  • The distance of the table from the surrounding (wrapped) text.
  • Whether the table should move with the text.
  • Whether the text can overlap the table.

The best way to learn about these properties is to just experiment. For instance, setting a Right property of 3 removes the text to the right of the table–remember when I said I probably wouldn't want a table to break up text? Well, this is one way to get the table inside the paragraph without breaking up the text, and I reset just one property!

As you experiment, you’ll probably find, as I have, that dragging a table around produces a pretty good balance. It’s good to know though, that you can force things along a bit by setting the positioning properties.

Remove virtual machine swap space on disk

The use case is rare, but occasionally you may need to keep a VMware vSphere virtual machine from using a virtual swap file.

Each virtual machine in vSphere is subject to a number of memory management technologies, which include the balloon driver delivered through VMware Tools, transparent page sharing on the host, memory compression, and hypervisor swapping. (The technologies are listed from most desirable to least desirable.) The hypervisor swapping function backs part of the virtual machine's memory with disk instead of addressable space in the host's RAM.

A virtual machine creates a swap file on disk; this is separate from anything that may be configured in the operating system, such as a Windows page file inside the guest virtual machine. This swap file (Figure A) is equal in size to the memory allocation of the virtual machine.

Figure A

Click the image to enlarge.

This particular virtual machine only has 4 MB of memory (it is a low-performance test system), but the 4 MB of RAM is also represented on the VMFS datastore (LUN-RICKATRON-1) as a .vswp file. While the 4 MB for this virtual machine is not too impactful on most storage systems, larger virtual machine memory provisions can chew up datastore space and (hopefully) never be used.

If you don't want this .vswp file on the storage at all, there is one way to prevent the virtual machine from representing its physical memory allocation on disk. Using a memory resource reservation for the virtual machine's entire amount of memory means it will not power on unless the host can exclusively provide the reserved amount; in that situation, the guest will never resort to hypervisor swapping (Figure B).

Figure B

Click the image to enlarge.

Once the reservation is made, the next time the virtual machine is powered on, it will not claim space for the .vswp file on the datastore.
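If you manage this with PowerCLI rather than the vSphere Client, a reservation equal to the virtual machine's configured memory can be set in one short pipeline. This is a sketch that reuses the virtual machine name from the VMware Tools example earlier; verify the cmdlets against your PowerCLI version:

# Reserve all of the VM's configured memory so no swap space is needed on the datastore.
$vm = Get-VM -Name DROBO-WS2K8R2-SQL2K8
$vm | Get-VMResourceConfiguration | Set-VMResourceConfiguration -MemReservationMB $vm.MemoryMB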

Note: This configuration should be used only in very specific situations, such as a tier 1 application for which you will forgo the benefits of VMware memory management to ensure the absolute highest performance.

Rick Vanover (MCITP, MCSA, VCP, vExpert) is an IT Infrastructure Manager for a financial services organization in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

How to market your internal IT department

Marketing their organization is something most CIOs spend little time considering. While no one likes a shameless promoter, many of the most successful IT organizations I have worked for actively market themselves around the corporation, even if they may not use the term “marketing” to describe their activities.

Without some element of marketing, IT will often be neither seen nor heard, unless summoned, save for the rumor-mill rehashing of its most recent stumble or failure. With some simple marketing efforts, the company as a whole can be reminded of the services IT can offer, informed of recent successes, and be seen as a home to thought leadership on technology.

Here are a few simple ways to market your internal IT organization with little to no marketing budget and a minimal investment of time.

Change your attitude
The most effective leaders in any organization are those who can sell their vision. While it may seem crass to call every great leader an effective salesperson, it is largely true. Effective leaders can pitch their point, expound on the benefits that are most likely to appeal to the current listener, and then "close the deal" with the support of much of the organization.

This "sales" attitude permeates everything from management presentations to structuring organizational efforts so that they appeal directly to potential "customers". IT, especially, is a group that peddles ideas, and treating every interaction with other business units as a chance to pitch your most compelling ideas can do wonders for how you structure a proposal and present its benefits.

While something like enterprise software might affect the whole organization, a change in attitude will cause you to present the package differently to, say, the operations team rather than to the finance department. This will cause you to have laser-like focus on appealing to the listener’s interests, rather than self-centered technical discussions or questionable and unconvincing “benefits”.

Drop the jargon
The most effective marketing reaches us in a language we can easily understand. The same product description will use different language and imagery when targeted at one group versus another, but in each case will appeal to those groups in their own terms.

While IT professionals like us may get excited by talk of virtualized cloud services and ITIL frameworks, the people impacted by these technologies usually care less about the fancy verbal footwork and simply want to know how their working lives will be improved by what we are peddling. When we can separate the benefits from the technologies that deliver them and effectively articulate those benefits, then IT will be best presented and most easily accepted and embraced.

Become a thought leader
Technology, especially in the consumer space, is changing at a record pace. Most of us have been cornered and asked for an opinion on some new gadget or technology making the press's rounds.

Rather than waiting for these ad hoc “hallway moments”, publish an informal newsletter that talks about some of IT’s recent successes that address current technology trends. There’s no shame in having a young staffer who is passionate about the latest mobile technology pen a couple of paragraphs about how Android could affect the company or how some apps could help the iPad become a productivity tool. If CIOs are not presenting this information, executives may be looking to teenage children or staffers outside IT, making corporate IT look like a dated dinosaur rather than a trend-spotter.

An IT newsletter need not be an overwrought, 10-page affair with marvelous graphics. It can start as a simple four or five paragraphs e-mailed to a handful of colleagues. The best newsletters are usually informal and informational, addressing the concerns of their readers. Ask a trusted colleague or two what technologies they are following and interested in learning more about. Combine this with short and subtle promotional features about IT's recent successes, and you have a winning formula that presents IT as competent and knowledgeable. Old-fashioned e-mail is usually a better tool than a blog buried on an internal Web site that few will read, and if you are comfortable with it, self-effacing humor and an informal style will gain more readers than a staid yawner that reads like a thesis.

While marketing is probably one of the last things you thought you would need to worry about as an IT executive, any organization, whether it is a Fortune 100 company or an IT department of five people at a small company, can benefit from being presented in the best possible light. Dedicating four or five hours each month to these activities can build trust in the IT department, improve its image, and even make the next budget-approval process far less painful.

Patrick Gray is the founder and president of Prevoyance Group, and author of Breakthrough IT: Supercharging Organizational Value through Technology. Prevoyance Group provides strategic IT consulting services to Fortune 500 and 1000 companies.

Disable Windows Update for device driver installation

When new hardware is installed on a Windows server, there are a number of options to consider, such as which driver to use and whether to let Windows Update supply the driver. There are several ways to specify this behavior.

For standalone systems, the Device Installation Settings option in System properties can dictate behavior for device installation. Figure A shows this option for standalone systems.

Figure A

If the option needs to be centrally managed for a number of computer accounts, Group Policy can configure it centrally. Device installation behavior is managed in Group Policy under Computer Configuration | Policies | Administrative Templates | System | Device Installation. From there, the Specify Search Order For Device Driver Source Locations value and a number of other behavior values can be configured to dictate how drivers are installed on servers. To disable Windows Update as a driver source, enable this value as shown in Figure B.

Figure B

It can be very important to set this type of configuration for client systems as well as server systems. Storage systems, for example, may be very particular about supported driver versions for devices such as Fibre Channel host bus adapters (HBAs) that work with a storage processor driver managing multipathing.

Other devices such as tape drives may have specific driver requirements for interaction with HBAs, SCSI, or SAS controller interfaces. On the client side, printers are often the primary target for driver revision control. This same area of Group Policy can be used to specify additional options for device installation behavior.
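For administrators who prefer to script this setting rather than click through Group Policy, the sketch below writes the registry value that the Specify Search Order For Device Driver Source Locations policy is generally reported to control. The key path and the meaning of the value (0 disabling the Windows Update search) are assumptions on my part rather than something stated in this article, so verify the mapping in your own environment before relying on it.

import winreg

# Assumed policy key; confirm this mapping on your own systems first.
POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\DriverSearching"

def disable_windows_update_driver_search():
    # Requires an elevated (administrator) Python process on Windows.
    key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY, 0,
                             winreg.KEY_SET_VALUE)
    try:
        # 0 is assumed to mean "do not search Windows Update" for this value.
        winreg.SetValueEx(key, "SearchOrderConfig", 0, winreg.REG_DWORD, 0)
    finally:
        winreg.CloseKey(key)

if __name__ == "__main__":
    disable_windows_update_driver_search()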

Rick Vanover (MCITP, MCSA, VCP, vExpert) is an IT Infrastructure Manager for a financial services organization in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

Mobile banking apps may be vulnerable

Banking apps for mobile devices are increasing in popularity. Estimates by financial services firm TowerGroup suggest there will be 53 million people using mobile banking apps by 2013.

My bank recently rolled out its own iPhone app. I downloaded it and was just about to check it out. Then, paranoia. If you read my article about whether online banking is safe or not, you will understand. What do I know about this app?

So, I started looking into mobile banking apps. It did not take long to find out security advocates also have their concerns. Spencer Ante of the Wall Street Journal raises a warning in his article: “Banks Rush to Fix Security Flaws in Wireless Apps“. Here is the lead paragraph:

“A number of top financial companies and banks such as Wells Fargo & Co., Bank of America Corp., and USAA are rushing out updates to fix security flaws in wireless banking applications that could allow a computer criminal to obtain sensitive data like usernames, passwords, and financial information.”

The same article mentioned viaForensics, a company specializing in securing mobile applications, as the firm responsible for discovering the vulnerabilities. Good for them. My question is, why is this even happening? It is not complicated. Our banking credentials should be considered sacred, period.

On a good note, viaForensics’ Web site mentions their researchers are working with the affected financial institutions: “Since Monday (11/01/2010), we have been communicating and coordinating with the financial institutions to eliminate the flaws.”

The blog post goes on to say: “Since that time, several of the institutions have released new versions and we will post updated findings shortly.”

In the quote, viaForensics mentioned publishing new test results. That refers to their online service called appWatchdog.

Within days, and to their credit, most of the banking firms pushed out updates to remove the vulnerabilities. The following appWatchdog slide displays the results from testing Wells Fargo’s app for Android phones on November 3, 2010:

Three days later, the same Android app from Wells Fargo passed every test:

Why worry then?
It appears mobile banking applications are getting fixed. It was also pointed out that viaForensics found vulnerabilities, not actual attacks. So there is nothing to worry about? Not quite; I talked to experts who disagree.

One researcher in particular voiced the following concerns:

  • Most mobile devices are so new, security apps are not available.
  • Keeping members’ banking information secure should be a no-brainer, yet it is not so.
  • PCs are still a target-rich environment, so criminals are not yet focused on creating mobile phone malware.

The researcher’s first two concerns rang true. The third intrigued me, meaning I needed to learn more about it. I came across this article, quoting Sean Sullivan of F-Secure: so far in 2010, F-Secure has detected 67 strains of smartphone malware, compared with thousands aimed at PCs.

That number is insignificant next to the PC figure, but Sullivan also mentioned this year’s total was nearly double last year’s. So, stay tuned.

What’s the answer?
For right now, if banking online is a must, using a dedicated PC, a LiveCD, or a bootable flash drive is still the best solution.

Final thoughts
Not sure what it all means. Is it FUD, or are we making the same mistakes with mobile banking that we made with online banking on PCs? What do you think?

Update (Nov. 29, 2010):
Andrew Hoog, chief investigative officer for viaForensics, contacted me. They tested five new mobile applications: Groupon, Kik Messenger, Facebook, Dropbox, and Mint.com. All of the applications failed to store usernames and application data securely. More troubling, four of them (Groupon for Android, Kik Messenger for Android and iPhone, and Mint.com for Android) were storing passwords as plain text.

Michael Kassner has been involved with IT for over 30 years and is currently a systems administrator for an international corporation and security consultant with MKassner Net.

Most important updates in Red Hat Enterprise Linux 6

On Nov. 10, Red Hat unveiled the latest version of Red Hat Enterprise Linux (RHEL): version 6. Version 5 was released in March 2007, so it has been a long road to produce the latest version.

Due to the length of time between releases, RHEL6 is a system that is quite unlike RHEL5. Obviously it comes with newer versions of software across the board, something welcomed by those who find RHEL5 a little long in the tooth. Keeping in mind that “bleeding edge” doesn’t necessarily belong in an enterprise platform, it is nice to have more recent software along with the inevitable feature enhancements.

Cloud computing
One of the big focuses of RHEL6 is cloud computing. This involves a number of factors, and a lot of work has gone into making it not only viable but highly competitive with other offerings.

Performance enhancements abound, making it very efficient and scalable not only for current hardware but also for hardware yet to come. For example, systems with 64TB of physical memory and 4,096 cores/threads are not typically in use today, but RHEL6 will support them, out of the box, when they are.

While performance is definitely one area of cloud computing, another area is virtualization, and this is where KVM becomes a direct competitor to other virtualization solutions from vendors such as VMware. Using KVM and libvirt, RHEL6 provides a great virtualization management infrastructure with a really powerful virtualization solution–all baked right into the operating system (OS) for no extra cost.
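As a rough illustration of that management infrastructure (my own sketch, not something taken from Red Hat’s documentation), the libvirt Python bindings can enumerate KVM guests on a host, assuming the libvirt-python package is installed and a local qemu:///system connection is available:

import libvirt

conn = libvirt.open("qemu:///system")  # local KVM hypervisor connection
try:
    for dom_id in conn.listDomainsID():        # IDs of running guests
        dom = conn.lookupByID(dom_id)
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        print("%s: %d vCPUs, %d MB in use" % (dom.name(), vcpus, mem // 1024))
finally:
    conn.close()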

Security
That said, the thing I am most passionate about is security. Perhaps it’s an odd thing to be so interested in, but it’s both a hobby and a profession for me, so the security features in RHEL6 are really important. And they will be important to anyone with a public or private cloud because heavy virtualization and cloud computing make proactive security ever more important.

While RHEL has provided SELinux for a long time, RHEL6 provides further SELinux support and policies, making it easier to use now than in previous versions of RHEL. But SELinux is just one piece of the puzzle, and it’s a complex one at that. While great strides have been made to make it easier, many people still opt to turn it off rather than figure out how to make it do what they want. So this is where other security enhancements come into play.

While RPM packages have always been signed, RHEL6 now uses the SHA-256 algorithm and a 4096-bit RSA signing key to sign packages. This provides users with greater confidence that packages are legitimate and authentic, compared to the weaker MD5 and SHA-1 algorithms that were used in previous versions.

Other security features, either carried forward from previous versions and improved upon or new to RHEL6, include various proactive binary protection mechanisms. These include GCC’s FORTIFY_SOURCE extensions, which now also cover programs written in C++, as well as glibc pointer encryption, SELinux executable memory protection, compilation of all programs with SSP (Stack Smashing Protection), ELF binary data hardening, support for Position Independent Executables (PIE), and glibc heap/memory checks by default.

In the kernel are protections like NX (No-Execute) support by default, restricted access to kernel memory, and Address Space Layout Randomization (ASLR). The kernel also supports preventing module loading, GCC stack protection, and write-protected kernel read-only data structures.
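One of these protections is easy to observe from user space. As a small, assumed illustration (not part of the original article), most Linux kernels, including RHEL6’s, expose the ASLR setting through procfs:

from pathlib import Path

# 0 = disabled, 1 = conservative randomization, 2 = full randomization
value = Path("/proc/sys/kernel/randomize_va_space").read_text().strip()
print("randomize_va_space =", value,
      "(enabled)" if value != "0" else "(disabled)")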

With all these features, it is clear that proactive security has been taken seriously in RHEL6, and that a lot of work has gone into making RHEL a secure OS suitable for any environment you throw at it: virtual, physical, or cloud. When you add the new application features that come with newer software versions, the thousands of bugs fixed, and the standard 7-year support lifecycle (with an optional extension to 10 years), RHEL6 is highly suited to enterprise deployment.

Yes, I am biased towards Red Hat as I am a company employee, but I’m also confident in what RHEL6 brings to the table and willingly stand behind it.

Vincent Danen works on the Red Hat Security Response Team and lives in Canada. He has been writing about and developing on Linux for over 10 years.

Latest Security applications News

Posted: December 9, 2010 in Security

‘Trojanized’ Google Android security tool found in China

Suspicious code is lurking in a repackaged Chinese version of a tool Google released last weekend to remotely clean malicious apps off Android phones, Symantec said Thursday.

This “trojanized” package was found on an unregulated third-party Chinese marketplace and not on the official Android Market, the security vendor said in a blog post.

After 58 malicious apps were found on the Android Market last week and downloaded onto about 260,000 devices, Google removed the apps from the market and then wiped them from the phones too.

Now, Symantec says someone appears to have taken the “Android Market Security Tool” used to clean up the devices infected with the malware, repackaged it and inserted code in it that seems to be able to send SMS messages if instructed by a command-and-control server.

It also looks like the code used in the new threat is based on a project hosted on Google Code and licensed under the Apache License, according to Symantec.

A Google spokesman provided this statement when asked for comment: “We encourage Android users to only install applications from sources they trust.”

Several things should raise red flags for people with this threat: it’s not on the official, trusted Android Market, and it requires a user to install it manually, whereas the Google tool used an automatic push function to distribute the legitimate app.

The initial malware found on the Android Market, dubbed “DroidDream”, not only could capture user and product information from a device but also had the ability to download more code capable of further damage.

“We have added detection for the trojanized version of Google’s application as Android.Bgserv,” Symantec said.

Meanwhile, a Kaspersky researcher has questioned the efficacy and methods of Google’s Android security tool itself.

Study: Negligence cause of most data breaches

Negligence is the biggest cause of data breaches at corporations, but criminal attacks are growing fastest, a study released Wednesday concludes.

The average cost of a data breach for a victimized organization increased to US$7.2 million, and the average cost per record came to US$214, up US$10 from the previous year, according to the 2010 Annual Study: U.S. Cost of a Data Breach, which was conducted by the Ponemon Institute and based on data supplied by 51 U.S. companies across 15 different industry sectors.

The costs associated with a breach involve detecting the incident, investigation, forensics, customer notification, paying for identity-protection services for victims, business disruption, and productivity losses, said Larry Ponemon, chairman and founder of the Ponemon Institute. A record can contain anything from a single piece of information about an individual to multiple pieces of data, including social security number, contact information, driver’s license number, purchasing habits, and account number, he said.

Malicious or criminal attacks are the most expensive and make up the fastest-growing category, with 31 percent of all breaches involving malice or crime. Negligence was the most common threat, with 41 percent of all breaches, according to the study, which was sponsored by Symantec.

The most expensive breach reported in the study was US$35.3 million, and the least expensive was US$780,000.

The companies have devised an online Data Breach Calculator to help estimate how likely a breach is and how much it would cost based on an organization’s size, industry, location, and security practices.
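As a back-of-the-envelope illustration of the per-record figure (and not the calculator’s actual methodology, which also weighs industry, location, and security practices), a purely linear estimate looks like this:

COST_PER_RECORD_USD = 214          # average per-record cost quoted in the study

def estimate_breach_cost(records_exposed):
    # Deliberately simplistic: real breach costs are not strictly linear in record count.
    return records_exposed * COST_PER_RECORD_USD

# Roughly 33,600 exposed records would match the study's US$7.2 million average.
print(estimate_breach_cost(33_600))   # 7190400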

Report: Malware-laden sites double from a year ago

More than 1 million Web sites were believed to be infected with malware in the fourth quarter of last year, nearly double from the previous year, according to figures released today by Dasient.

Malvertising, advertising containing malware, also is on the rise, with impressions doubling to 3 million per day from the third quarter of 2010, Dasient said in a blog post.

“The probability that an average Internet user will hit an infected page after three months of Web browsing is 95 percent,” the company said.
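Dasient did not publish the model behind that 95 percent figure, but such estimates usually rest on a simple independence assumption: if each page view carries a small chance p of landing on an infected page, the chance of at least one hit over n views is 1 - (1 - p)^n. The per-page rate and browsing volume below are illustrative guesses, not Dasient’s numbers:

def hit_probability(p_per_page, pages_viewed):
    # Probability of at least one infected page over the browsing period.
    return 1 - (1 - p_per_page) ** pages_viewed

# Assumed figures: 1 infected page per 3,000 viewed, ~100 pages a day for 90 days.
print(round(hit_probability(1 / 3000, 100 * 90), 3))   # ~0.95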

The news corresponds with information released this week by another security firm. An analysis of more than 3,000 Web sites across 400 organizations last year found that 44 percent of them had serious vulnerabilities at all times, while 24 percent were frequently vulnerable for an average of at least 270 days a year, according to WhiteHat Security, which provides Web site testing and security services for companies. Meanwhile, only 16 percent of the sites examined were found to be rarely vulnerable, the report said.

About 64 percent of those sites had at least one information leakage vulnerability, which inched past cross-site scripting as the most prevalent vulnerability, WhiteHat said.

Neither WhiteHat nor Dasient identified the Web sites they analyzed or disclosed whether any of the biggest Web brands were among those with malware or vulnerabilities.

Dasient researchers wanted to see how easy it would be to spread malware on social-networking sites and created some test accounts to spread various types of links. More than 80 percent of the dozen unidentified sites it tested allowed through links that were on Google’s Safe Browsing list, while all of them allowed through links that led to a benign drive-by download.

In another test, the researchers posted an ad whose click-through links led to a benign drive-by download and found that the social-networking site kept the ad up for more than three weeks before pulling it. The ad had the headline “Click for a security test”, led to a site at “hackerhome.org,” and said a Windows calculator would pop up if the computer was vulnerable.

China-related DoS attack takes down Codero-hosted Web sites

A distributed denial-of-service attack that affected thousands of customers at Codero and other hosting providers appeared to come from within China and to be aimed at either a Chinese site that is critical of communism or its Domain Name System provider, Codero said Tuesday.

The disruptions that took Codero’s customers offline for most of the morning were collateral damage in the attack, Ryan Elledge, chief operating officer at Codero, told ZDNet Asia’s sister site CNET.

Directly in the path of the attack was a Codero customer that hosts DNS records for sites on the Internet, including a Web site critical of communism that appeared to be the ultimate end target, he said. At least three other hosting providers for that Web site were also affected by the attack, he said. Elledge declined to name any of the companies involved or the Web site.

Meanwhile, all of Codero’s customers were back up by 1 p.m. PT, according to Elledge.

About 5,000 servers in its Phoenix data center were affected, which meant slowdowns or outages for at least that many customers, Elledge said. He could not say how many customers had been affected in total.

Initially, Codero thought the problem was due to issues with one of its upstream providers, but that turned out not to be the case, he said. “We were receiving more than 1.5 million packets per second in the attack. It paralyzed our core routers, and our upstream providers were unable to pinpoint where the target IPs were,” he said.

The company reported problems beginning about 7:30 a.m. PT. “We are experiencing network issues affecting part of our PHX data center,” the company posted on its Twitter page. “Engineers are working with upstream providers.”

“Another attempt is now under way at routing traffic to specific segments of our network,” Codero tweeted around 9:30 a.m. PT.

Codero, which has points of presence in Irvine, Calif.; Denver; Chicago; and Ashburn, Va., is migrating a data center from San Diego to Phoenix. Only the Phoenix location was affected by the attack, Elledge said.

Google confirms it pulled malicious Android apps

After several days of silence on the issue, Google has confirmed it removed several malicious apps from its Android Market earlier this week and said it would remove the apps from users’ devices as well.

Only devices running an Android version earlier than 2.2.2 were susceptible to the rogue apps, which took advantage of known vulnerabilities, the Internet giant reported yesterday in a company blog post. The company believes the only information accessed by the apps was the unique codes used to identify the specific device and the version of Android it was running.

Fifty-eight malicious apps were identified and removed, but not before they were downloaded to about 260,000 devices, according to a TechCrunch report. Google said it would use a kill switch to remotely remove the apps from users’ devices and push an Android security update to affected users to repair the damage done by the apps. Affected users can expect to receive an e-mail from Android Market support explaining the action, Google said.

The developer accounts associated with the apps were suspended and law enforcement officials were contacted, Google said.

Earlier this week, a Reddit user discovered that pirated versions of legitimate apps on the Android Market were infected by a Trojan called DroidDream, which uses a root exploit dubbed “rageagainstthecage” to compromise a device, according to a report on enthusiast site Android Police.

The malware was described as especially virulent because it apparently can not only capture user and product information from a device but also download more code capable of further damage.

Google representatives did not immediately respond to a request for further information or comment.

DDoS attacks harmless: Anonymous user

Distributed denial-of-service (DDoS) attacks are harmless, according to Australian Matthew George, who was charged for his role in the Anonymous group’s bid to crash federal government websites last year.

George was one of possibly hundreds of Australians under the Anonymous banner who participated in DDoS protest attacks against the Australian Parliament House and Department of Broadband, Communications and the Digital Economy Web sites. Melbourne resident Steve Slayo was the only other user charged for participating in the attacks.

For his role, George faced 10 years imprisonment for “causing unauthorized impairment of electronic communication to or from a Commonwealth computer”, but received a US$550 fine with a recorded conviction. Federal police raided George’s home in June last year and he faced court in October.

Speaking to ZDNet Asia’s sister site ZDNet Australia, George rebuked comments by the Australian Federal Police that sentences for DDoS attacks are too weak, instead saying that the act does not cause permanent damage.

“DDoS service attacks are harmless. Most hosting companies have DDoS attack precautions in place and there is no long-term damage caused to any servers or Web sites,” George said.

“It is far different to hacking in and defacing or rooting a server [because] when the DDoS attack is stopped everything goes back to normal as if nothing had ever happened.”

“You can’t compare DDoS attacks to child porn, hacking or writing a virus–it’s like comparing apples with oranges.

“As far as saying that the sentence was too weak, maybe they should pass that on to the district public prosecutors as [it] agreed that the sentence was fair in my case.”

AFP High Tech Collection and Capability manager Grant Edwards told a security conference this month that the courts are unwilling to issue tougher sentences for DDoS attacks because “they don’t understand the threat”.

Edwards cited the penalties handed to George and Slayo, who received a good behavior order, as examples of soft sentences.

George said the criminal conviction may make it harder for him to gain employment opportunities.

He said he believes most participants in the DDoS attacks were from Australia. The AFP has refused to confirm whether it is investigating other users for their role in the attacks. It had not received requests from the likes of MasterCard and Visa, which were hit with DDoS attacks for blocking funds to whistleblower Web site Wikileaks.

A ZDNet global poll found that readers do not support DDoS attacks on companies that cut off Wikileaks.

This article was first published at ZDNet Australia.

WordPress hit with second big attack in two days

The popular blogging-site host WordPress was hit with another distributed denial-of-service attack last Friday, the second in two days.

“Unfortunately, the DDoS attack from yesterday returned in a different form this morning and affected sitewide performance,” the company said in a notice on its Automattic site, which serves as a dashboard for the service. “The good news is that we were able to mitigate it quickly and performance returned to normal around 11:15 UTC. We are continuing to monitor the situation closely.”

Stats on Automattic.com show that the site was affected for about an hour or so starting around 3:15 a.m. PST. One day earlier, WordPress was hit with an attack that reached “multiple Gigabits per second and tens of millions of packets per second,” hampering the company’s three data centers and disrupting nearly 18 million hosted blogs and members of its VIP service, including the Financial Post and TechCrunch.

Typically, DDoS attacks are accomplished using botnets of thousands of compromised computers that are directed to a target Web site with the motivation of overwhelming the site and taking it offline.

WordPress did not provide many details about either attack, but founder Matt Mullenweg told ZDNet Asia’s sister site CNET on Thursday that the first attack may have been politically motivated against one of the site’s non-English blogs. He did not immediately respond to an e-mail seeking comment on Friday.

Expert: Android Market should scan for malware

Android Market apps should be scanned for traces of malware to protect Android customers from downloading apps that look legitimate but are in fact malicious, a security expert said.

Last week Google removed a bunch of malicious apps, most disguised as legitimate apps, from the Android Market after they were found to contain malware. The malware, dubbed DroidDream, uses two exploits to steal information such as phone ID and model, and to plant a back door on the phone that could be used to drop further malware on the device and take it over.

“At a minimum, they have to do signature-based scanning for known malware,” said Chris Wysopal, chief technology officer at Veracode, an application security provider. “DroidDream is now a malware kit and it would be easy for people to make variations of it and insert it into new software.”

But traditional signature-based antivirus software isn’t good at detecting brand new malware or existing malware that has been modified enough to slip past the antivirus programs. To catch something like DroidDream then, behavioral-based antivirus scanning should also be used, according to Wysopal.

“Downloading and installing additional software onto the device outside of the app store is the kind of behavior that should be scanned for,” he said.
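To make the distinction concrete, signature-based scanning at its simplest is just matching a file’s fingerprint against a list of known-bad fingerprints, which is exactly why it misses repackaged or modified samples. The sketch below is my own minimal illustration, not any vendor’s implementation; the hash list and file name are placeholders, not real DroidDream data:

import hashlib

KNOWN_BAD_SHA256 = {
    "0" * 64,   # placeholder entry, not an actual malware hash
}

def looks_malicious(apk_path):
    # Hash the whole package and look it up in the known-bad set.
    digest = hashlib.sha256()
    with open(apk_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() in KNOWN_BAD_SHA256

print(looks_malicious("suspect-app.apk"))   # placeholder file name

A single changed byte produces a different hash, which is why behavioral detection of the kind Wysopal describes is needed to catch variants.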

A Google spokesman declined to comment beyond confirming that the company had removed some apps and disabled several developer accounts for violating Android Market policies.

Most if not all of the 55 or so apps that were pulled from the Android Market were repackaged versions of legitimate apps, said Kevin Mahaffey, chief technology officer at Lookout, which provides security software and services for Android, BlackBerry, and Windows. This means that even more cautious Android users could have been more easily duped into downloading one of the apps, he said. (Symantec has a list of some of the apps removed from the Android Market here.)

Depending on the handset, some Android installations may be patched by now, but others are not, he said. The vulnerabilities exploited by the malicious apps have been patched in Android 2.3, also known as Gingerbread, but older versions could still be vulnerable, according to Mahaffey.

It’s not clear whether DroidDream did in fact download any software onto devices that installed any of the malicious apps. The command-and-control server the malware set up to communicate with the victim devices is offline now and “we haven’t seen any evidence that the server was pushing apps to the devices,” Mahaffey said.

It’s also a mystery who is behind the malicious apps, but there’s a possibility it’s someone in China as the malware was also found on alternative Android marketplaces that target Chinese users, he said.

Cleanup can be a pain; in addition to removing the app, any additional software it may have hidden in the device must be wiped. Lookout can walk Android users who need help through the cleanup process, Mahaffey said.

The Android Market is flourishing, with the number of apps growing faster than the iPhone market, according to Lookout. Android also has a greater overall share of the U.S. mobile operating system market (29 percent) than Apple’s iOS and BlackBerry (both 27 percent), Nielsen announced last week.

Much of the success of the platform is due to the fact that the operating system is open source and thus attracts a large number of developers. The openness of Android’s platform fosters innovation but leaves much of the responsibility for security on the shoulders of Android customers, experts say. (More details on the differences between the Android and iPhone security models are here.)

In one analogy Wysopal has come across, the iPhone environment has been likened to Disney World and Android to New York City. You might not have as much freedom and choice at Disney World, but you probably feel safer.

“How are people who don’t read CNET supposed to know that they need to do something on their phone to bring it back to its factory state because it’s been compromised” by a malicious app, Wysopal said. Apple could send a warning out to all iPhone users if it needed to but that can’t happen on the Android because of all the different flavors of the operating system running on the different handsets, he said.

This may be the first time Google has removed malicious apps from the Android Market, but it’s not the first time apps have been pulled. Last year two proof-of-concept apps designed to test how easy it would be to distribute an innocuous program that could later be made malicious were removed. Later in the year Google pulled another app the same researcher created to illustrate a flaw in the mobile framework that allowed apps to be installed without a user’s knowledge. That hole also was plugged.

WordPress hit by ‘extremely large’ DDoS attack

Blog host WordPress.com was the target of a distributed denial-of-service (DDoS) attack earlier today described by the company as the largest in its history.

As a result, a number of blogs, including those that are part of WordPress’ VIP service, suffered connectivity issues. That includes the Financial Post, the National Post, and TechCrunch, along with the service’s nearly 18 million hosted blogs.

According to a post by Automattic employee Sara Rosso on the company’s VIP Lobby (which had been down at the time of the attacks, though was archived by Graham Cluley over at Naked Security), the size of the attack reached “multiple Gigabits per second and tens of millions of packets per second”. Rosso had also said putting a stop to the attack was “proving rather difficult”.

Rosso had also said the company would be handling its VIP sites ahead of general users.

Denial-of-service attacks are designed to overwhelm Web sites with requests, effectively shutting them down. The ones that are distributed present a much larger challenge to combat, since they can come from a wider variety of networks and hosts.

In an e-mail to ZDNet Asia’s sister site CNET, WordPress founder Matt Mullenweg said the attack had affected three of the company’s data centers and was the largest it has seen in the company’s six-year history. Mullenweg also said that the attack “may have been politically motivated against one of our non-English blogs”, but that that detail had not been confirmed. Full e-mail below:

There’s an ongoing DDoS attack that was large enough to impact all three of our data centers in Chicago, San Antonio, and Dallas–it’s currently been neutralized but it’s possible it could flare up again later, which we’re taking proactive steps to implement.

This is the largest and most sustained attack we’ve seen in our six-year history. We suspect it may have been politically motivated against one of our non-English blogs but we’re still investigating and have no definitive evidence yet.

WordPress later reported that the problem had been fixed. “Our systems are back to normal. We’ll continue to monitor them and post updates here if needed,” the company said on its status page. There is no word yet on whether the company has determined which of its blogs was the target of the attack.

Google pulls infected apps from Android Market

Google has taken down more than 50 infected programs from its official app store, Android Market.

The apps contained malware called DroidDream hidden in seemingly legitimate apps and were pulled on Tuesday, mobile security company Lookout said in a blog post on Wednesday. Between 50,000 and 200,000 users downloaded the infected apps, said the company.

“Unlike previous instances of malware in the wild that were only available in targeted alternative app markets, DroidDream was available in the official Android Market in addition to alternative markets, indicating a growing need for Android users to take extra caution when downloading apps,” the blog post said.

Read more of “Google pulls infected apps from Android Market” at ZDNet UK.

Air traffic control system ‘not safe’, say UK controllers

Technology being introduced at one of the two major U.K. air traffic control hubs is “not fit for purpose” and did not adequately handle a breakdown in air traffic communications, according to a number of air traffic controllers.

The EFD (Electronic Flight Data) system rolled out at the Scottish and Oceanic Air Traffic Control (ATC) Centre at Glasgow Prestwick Airport has had difficulty handling complex inputs, according to people posting on an air traffic control forum.

“[Controllers] don’t want to use this system, not because they like to have a whinge, but because they know it is neither safe, nor efficient enough to do the job,” wrote one Prestwick controller, Arty-Ziff, on the Pprune forum in February. “This system should have been tested properly before it went into live operations.”

Read more of “Air traffic control system is ‘not safe’, say UK controllers” at ZDNet UK.

Microsoft fixes hole in its antivirus engine

Microsoft has plugged a hole in its antivirus and antispyware software that could allow an attacker authenticated on the local system to gain LocalSystem privileges.

The fix for the privilege escalation vulnerability is included in an update to the Microsoft Malware Protection Engine. Since the malware protection updates are applied automatically, most end users and administrators won’t need to do anything, Microsoft said in its advisory, issued Wednesday. The update should reach most systems within 48 hours of the advisory’s release, or by the weekend.

The vulnerability is rated “important” for Windows Live OneCare, Microsoft Security Essentials, Windows Defender, Microsoft Malicious Software Removal tool, Forefront Client Security, and Forefront Endpoint Protection 2010.

“The update addresses a privately reported vulnerability that could allow elevation of privilege if the Microsoft Malware Protection Engine scans a system after an attacker with valid log-on credentials has created a specially crafted registry key,” the advisory says. “An attacker who successfully exploited the vulnerability could gain the same user rights as the LocalSystem account. The vulnerability could not be exploited by anonymous users.”

Workstations and terminal servers are primarily at risk, Microsoft said.

Apple shares Mac OS X Lion with security experts

Apple not only released a preview of its next operating system, Mac OS X Lion, to developers on Thursday, the company is also giving it to security experts for review.

“I wanted to let you know that I’ve requested that you be invited to the prerelease seed of Mac OS X Lion, and you should receive an invitation soon,” said a letter sent by Apple to an unknown number of security researchers. “As you have reported Mac OS X security issues in the past, I thought that you might be interested in taking a look at this. It contains several improvements in the area of security countermeasures.”

Dino Dai Zovi and several other researchers tweeted about being invited to try out the prerelease version of the new Mac OS. “This looks to be a step in the direction of opening up a bit and inviting more dialogue with external researchers,” Dai Zovi wrote. “I won’t be able to comment on it until its release, but hooray for free access!”

I asked Charlie Miller, another expert on Mac security, if this was the first time Apple had offered to show an OS preview to security experts, and what the significance is.

“As far as I know they have never reached out to security researchers in this way. Also, we won’t have to pay for it like everybody else,” he wrote in an e-mail. “It’s not hiring us to do pen-tests of it, but at least it’s not total isolation anymore, and at least security crosses their mind now.”

“I haven’t downloaded it yet, but if I had, I couldn’t talk about it,” he added. “Damn NDAs.”

Google flags London Stock Exchange site for malware

Google has temporarily flagged up the London Stock Exchange’s website as a malware danger, due to a third-party advertiser on that site hosting malicious software.

The issue came up on Sunday, a spokesperson for the London Stock Exchange (LSE) told ZDNet Asia’s sister site ZDNet UK. “We were previously carrying an advert from a third-party provider,” a spokesperson said on Monday. “That advert, if you clicked through to the third-party website, had a flag up as being a virus or something similar. We’ve obviously taken the advert down off our website.”

According to Google’s Safe Browsing diagnostic page, a visit to a page on the LSE site on Saturday resulted in malicious software being downloaded and installed without user consent. The malware was hosted on a site called stripli.com, while two others — unanimis.co.uk and borsaitaliana.it — appeared to be “functioning as intermediaries for distributing malware to visitors of this site”, Google said.

Read more of “Google flags London Stock Exchange site for malware” at ZDNet UK.

Facebook seeking encryption for apps, mobile

In response to complaints that a recent announcement of secure connections doesn’t go far enough, Facebook said today that it’s planning to roll out additional changes that would shield mobile devices and all apps from eavesdropping.

Last month, Facebook began offering the ability for users to turn on HTTPS (Hypertext Transfer Protocol Secure) to encrypt all communications with the site. However, F-Secure and others have noticed that some apps require users to switch to a regular HTTP connection to use the app, but don’t warn users that the switch then becomes permanent.

Asked for comment, a Facebook representative said the company is working to make it so that the switch to unencrypted communications is only temporary and that Facebook is encouraging developers to write apps that support HTTPS.

“We are pushing our third-party developers to begin supporting HTTPS as soon as possible. We’ve provided an easy way for third-party developers to encourage to do this, and we hope to transition to fully persistent HTTPS soon,” the rep said in an e-mail. “However, we recognize that there is currently too much friction in this process and we are iterating on the flow so that the setting will only be temporarily disabled for that session. The account will then return to HTTPS on the next successful log in. We are testing this flow now and hope to launch it in the near future.”

Also this week, a computer science professor at Rice University demonstrated that his Motorola Droid X running Android could be eavesdropped on with the right sniffing software. Dan Wallach ran the Wireshark network protocol analyzer and Mallory proxy in his undergraduate security class a few days ago. He found that Facebook sends data (except log-in credentials) in the clear, even though he has his Facebook account set to use HTTPS whenever possible, he wrote on the Freedom to Tinker blog.

Asked for comment, the Facebook representative said the company is working to provide Secure Sockets Layer (used in HTTPS) on mobile platforms in coming months.

“After launching SSL for the site, we are still testing across all Facebook platforms, and hope to provide it as an option for our mobile users in the coming months,” the rep said in a statement. “As always, we advise people to use caution when sending or receiving information over unsecured Wi-Fi networks.”

Wallach also found that Google Calendar traffic is not encrypted. In response, a Google representative said, “We plan to begin encrypting traffic to Google Calendar on Android in a future maintenance release. When possible, we recommend using encrypted Wi-Fi networks.”

(A tip of the hat to Dan Goodin at The Register.)

EU outlines shortcomings in UK data law

The European Commission has revealed details of where it sees shortfalls in U.K. data law, as it considers whether to take action against the British government over the matter.

Data protection expert Chris Pounder received the information from the Commission as part of a long-running Freedom of Information exchange. In a blog post earlier this week, he shared the details of a letter sent to him by the European body, outlining where the U.K. Data Protection Act does not meet the requirements of the European Union’s Data Protection Directive.

“This case concerns an alleged failure of the U.K. legislation to implement various provisions of the Directive 95/46/EC on data protection,” the Commission said in the letter dated Feb. 16 (PDF). “As we have already informed you, the provisions concerned are Articles 2, 3, 8, 10, 11, 12, 13, 22, 23, 25 and 28 of that Directive.”

Read more of “EU outlines shortcomings in UK data law” at ZDNet UK.

US agents seek new ways to bypass encryption

SAN FRANCISCO–When agents at the Drug Enforcement Administration learned a suspect was using PGP to encrypt documents, they persuaded a judge to let them sneak into an office complex and install a keystroke logger that recorded the passphrase as it was typed in.

A decade ago, when the search warrant was granted, that kind of black bag job was a rarity. Today, however, law enforcement agents are encountering well-designed encryption products more and more frequently, forcing them to invent better ways to bypass or circumvent the technology.

“Every new agent who goes to the Secret Service academy goes through a week of training” in computer forensics, including how to deal with encrypted files and hard drives, U.S. Secret Service agent Stuart Van Buren said at the RSA computer security conference last week.

One way to circumvent encryption: Use court orders to force Web-based providers to cough up passwords the suspect uses and see if they match. “Sometimes if we can go in and find one of those passwords, or two or three, I can start to figure out that in every password, you use the No. 3,” Van Buren said. “There are a lot of things we can find.”

Last week’s public appearance caps a gradual but nevertheless dramatic change from 2001, when the U.S. Department of Justice spent months arguing in a case involving an alleged New Jersey mobster that key loggers were “classified information” (PDF) and could not be discussed in open court.

Now, after keystroke-logging spyware has become commonplace, even being marketed to parents as a way to monitor kids’ activities, there’s less reason for secrecy. “There are times when the government tries to use keystroke loggers,” Van Buren acknowledged.

As first reported by CNET, FBI general counsel Valerie Caproni told a congressional committee last week that encryption and lack of ability to conduct wiretaps was becoming a serious problem. “On a regular basis, the government is unable to obtain communications and related data,” she said. But the FBI did not request mandatory backdoors for police.

Also becoming more readily available, if not exactly in common use, is well-designed encryption built into operating systems, including Apple’s FileVault and Microsoft’s BitLocker. PGP announced whole disk encryption for Windows in 2005; it’s also available for OS X.

Howard Cox, assistant deputy chief for the Justice Department’s Computer Crime and Intellectual Property Section, said he did not believe a defendant could be legally forced–upon penalty of contempt charges, for instance–to turn over a passphrase.

“We believe we don’t have the legal authority to force you to turn over your password unless we already know what the data is,” said Cox, who also spoke at RSA. “It’s a form of compulsory testimony that we can’t do… Compelling people to turn over their passwords for the most part is a non-starter.”

In 2009, the Justice Department sought to compel a criminal defendant suspected of having child porn on his Alienware laptop to turn over the passphrase. (A border guard said he opened the defendant’s laptop, accessed the files without a password or passphrase and discovered “thousands of images of adult pornography and animation depicting adult and child pornography.”)

Another option, Cox said, is to ask software and hardware makers for help, especially when searching someone’s house or office and encryption is suspected. “Manufacturers may provide us with assistance,” he said. “We’ve got to make all of those arrangements in advance.” (In a 2008 presentation, Cox reportedly alluded to the Turkish government beating a passphrase out of one of the primary ringleaders in the TJ Maxx credit card theft investigation.)

Sometimes, Van Buren said, there’s no substitute for what’s known as a brute force attack, meaning configuring a program to crack the passphrase by testing all possible combinations. If the phrase is short enough, he said, “there’s a reasonable chance that if I do lower upper and numbers I might be able to figure it out.”

Finding a seven-character password took three days, but because there are 62 possible characters for each position (26 uppercase letters, 26 lowercase letters, and 10 digits), an eight-character password would take 62 times as long. “All of a sudden I’m looking at close to a year to do that,” he said. “That’s not feasible.”
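The arithmetic behind that estimate is straightforward: with 62 possible characters per position, every added character multiplies the keyspace, and therefore the worst-case cracking time at a fixed guessing rate, by 62. The sketch below works through it using the three-day figure quoted above; actual timings depend entirely on hardware and guessing rate, so treat the output as illustrative only.

CHARSET_SIZE = 62        # 26 lowercase + 26 uppercase + 10 digits
DAYS_FOR_SEVEN = 3       # figure quoted by Van Buren for a 7-character password

for length in range(7, 10):
    combinations = CHARSET_SIZE ** length
    days = DAYS_FOR_SEVEN * CHARSET_SIZE ** (length - 7)
    print("%d characters: %.2e combinations, roughly %d days" %
          (length, combinations, days))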

To avoid brute-force attacks, the Secret Service has found that it’s better to seize a computer that’s still turned on with the encrypted volume mounted and the encryption key and passphrase still in memory. “Traditional forensics always said pull the plug,” Van Buren said. “That’s changing. Because of encryption…we need to make sure we do not power the system down before we know what’s actually on it.”

A team of Princeton University and other researchers published a paper in February 2008 that describes how to bypass encryption products by gaining access to the contents of a computer’s RAM–through a mechanism as simple as booting a laptop over a network or from a USB drive–and then scanning for encryption keys.

It seems clear that law enforcement is now doing precisely that. “Our first step is grabbing the volatile memory,” Van Buren said. He provided decryption help in the Albert “Segvec” Gonzalez prosecution, and the leaked HBGary e-mail files show he “went through a Responder Pro class about a year ago”. Responder Pro is a “memory acquisition software utility” that claims to display “passwords in clear text”.
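Tools like that automate the search for key material; as a much cruder, assumed illustration of the underlying idea (not the published key-schedule reconstruction algorithm), one can slide a window across a memory dump and flag high-entropy regions, which is where cryptographic keys tend to sit among otherwise structured data:

import math
from collections import Counter

def shannon_entropy(block):
    # Bits of entropy per byte over the block (maximum 8.0 for uniform data).
    counts = Counter(block)
    total = len(block)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def high_entropy_offsets(image, window=256, threshold=7.5):
    # Yield offsets of windows that look random enough to hold key material.
    for offset in range(0, len(image) - window, window):
        if shannon_entropy(image[offset:offset + window]) >= threshold:
            yield offset

# Usage with a placeholder dump file name:
# offsets = list(high_entropy_offsets(open("memdump.bin", "rb").read()))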

Cox, from the Justice Department’s computer crime section, said “there are certain exploits you can use with peripheral devices that will allow you to get in”. That seems to be a reference to techniques like one Maximillian Dornseif demonstrated in 2004, which showed how to extract the contents of a computer’s memory merely by plugging in an iPod to the Firewire port. A subsequent presentation by “Metlstorm” in 2006 expanded the Firewire attack to Windows-based systems.

And how to make sure that the computer is booted up and turned on? Van Buren said that one technique was to make sure the suspect is logged on, perhaps through an Internet chat, and then send an agent dressed as a UPS driver to the door. Then the hapless computer user is arrested and the contents of his devices are seized.

Father of firewall: Security’s all about attention to detail

newsmaker Marcus J. Ranum is a world-renowned expert and innovator on IT security, whose pragmatic approach is lauded by industry peers. Two decades ago he designed and implemented Digital Equipment Corporation’s (DEC) Secure External Access Link–regarded by many, but not Ranum, as the first commercial firewall.

He has held senior security roles at a variety of high-profile companies and has administered the White House e-mail system. He has consulted for many Fortune 500 organizations and has been a key presenter at countless security events around the world. Ranum resides on a remote farm in Pennsylvania, far from the cities and fast Internet. He’d welcome the end of the battle for IT security, even if it meant the end of the industry.

Q: Why did you enter the information security industry? What do you find most interesting about it?
Ranum: I got dragged in quite by accident when my boss at DEC, Fred Avolio, put me in charge of one of the company’s Internet gateways and told me to “build a firewall like Brian Reid and Bill Cheswick’s”–20 years later I suppose you could say I’m still working on that assignment. And, to be honest, I didn’t find anything particularly interesting about computer security; once you understand the strategic problem then it’s all just a lot of attention to detail.

Marcus Ranum (Credit: Munir Kotadia/ZDNet Australia)

What I do find most interesting about security is how people react to it: they want to do something dangerous safely and are generally resentful when you tell them that’s not going to work. So I see the whole industry as a vast dialectic between hope and concrete effort on one side, and cynical marketing and wilful ignorance on the other.

What do you find is the most pressing issue in the information security industry and what can be done to fix it?
The most pressing issue in information security is one we’re never likely to do anything about, and that’s achieving reliable software (security is a subset of reliability) on end-point systems. That means operating system design and reliable coding, two things that the trend lines are moving in the opposite direction of right now. Consequently, the current trend is “cloud computing”, which, in effect, is virtualizing the mainframe: acknowledging that end-points are badly managed and unreliable and putting data and processes in the hands of professionals who are expected to do a better job maintaining them and making them reliable–and cheap–than departmental IT.

Of course, that’s a pipe dream, because the same practices that brought us unreliable code-mass on the end points are being used to build the aggregated services. The backlash when it’s all revealed to be a pipe dream is going to be expensive and interesting, in that order.

What can be done to fix it? Again, the trend lines are all going the wrong direction–the fix requires technically sophisticated management with healthy scepticism toward marketing claims, good software engineering and a focus on getting the job done right, not getting something that you can’t understand from the lowest bidder. It will correct itself. The industry will re-aggregate into competence centers, which will become more expensive when they realize they have the upper hand, and that will re-trigger the fragmentation to the desktop and department cycle.

To fix things, we’d need to all focus ruthlessly on reliability, which means also quality, and not … “ooo! Shiny thing!”

You’re no fan of blacklisting, yet much of the industry is built on it and it’s the source of a lot of cash. Can you explain your opposition to blacklisting and whether you think change to a dominant whitelisting model is inevitable? What would happen to revenues in the security industry if such a shift happened?
I’m a huge fan of blacklisting! It’s a crucial technology! It just doesn’t answer the question that many people are expecting it to, which is “is this software good?” Blacklisting is the best technique for identifying something, because it can answer not only the question “is this thing bad?” but “what is it?” It seems to be human nature to want to know what was thrown at us, and that’s why people are so intellectually comfortable with signature-based intrusion detection/prevention and signature-based antivirus. It’s easy to implement and it’s easy to understand–and it’s easy to keep selling signature update subscriptions.

When you’ve got companies like Symantec saying that blacklists don’t work, I think it’s an important acknowledgement that a lot of the security industry is just happy to keep churning the money-pump as long as it’s not sucking air. The trend there seems to be reputation–[meaning] “continue to trust someone else’s opinion”–it’s a more flexible approach to building a cloudy and hype-ful dynamic blacklist, but in the long run it’s not going to work any better than static blacklists. By work I mean “solve the malware problem for customers”. If by work you mean “solve the relevance and financial problems for antivirus vendors”, I think it will “work” just fine for a long enough [time] to keep them happy.

Meanwhile, I keep asking IT managers “do you have any idea why you gave a user a computer?” and “if you know why they have a computer, why not configure that computer so that what it can do is what it’s supposed to do and not much else”–where much else means things like “participate in botnets”. I’m constantly baffled by how many IT managers say it’d be hard to enumerate all the software they run. It’s bizarre because knowing the answer to that question is what IT’s job is. If my company gave me a computer so I can do e-mail and edit company documents, it seems pretty simple to imagine that it ought to run some office apps and an e-mail client configured to talk to our IMAP server and maybe nothing else. For a while I was hopeful that the app-store model on increasingly powerful handheld devices would let us do away with the current “bucket of fish guts” approach to desktop security, but it looks like the app stores are going to be a big target and eventually a distribution vehicle for badware.

So, you need blacklists so that you can tell someone “that piece of weird stuff you just tried to run is called Stuxnet” and that’s interesting and useful, but you need the whitelists more, because that’s how you define your notion of what you think your computer should be doing. If you cast the problem in terms of a firewall policy it’s the old default-permit versus default-deny all over again. Default-deny is what the survivors do, and default-permit is for the guys who want to spend all their time doing incident response and forensics. None of this is anything less than completely obvious.
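Ranum’s default-deny versus default-permit distinction can be boiled down to a few lines. In this toy sketch (the application names are invented for illustration), the whitelist policy blocks anything it has not explicitly approved, while the blacklist policy allows anything it has not explicitly condemned, which is exactly how an unknown sample slips through the latter:

APPROVED = {"outlook.exe", "winword.exe", "excel.exe"}   # whitelist: what we meant to run
KNOWN_BAD = {"stuxnet.exe"}                              # blacklist: what we know is bad

def default_deny(app):
    # Whitelisting: anything not explicitly approved is blocked.
    return app in APPROVED

def default_permit(app):
    # Blacklisting: anything not explicitly condemned is allowed.
    return app not in KNOWN_BAD

for app in ("winword.exe", "unknown-new-tool.exe", "stuxnet.exe"):
    print("%-22s allowed under default-deny: %-5s default-permit: %s" %
          (app, default_deny(app), default_permit(app)))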

As far as security industry revenues–who cares? Nobody is worrying about the impact that the internal combustion industry has had on the steam-power boilermakers’ industry, are they? In fact, I think it’d be awesome if we could someday dry our hands, put away our tools and say “There, fixed it, now let’s write something fun!” Believe it or not there was a time early in the firewall industry when I thought we’d built all the tools that security would need; it was just a matter of fielding policy-based access control, offline authentication, point-to-point cryptography and then levelling up software quality. But in the late ’90s the lunatics took over the asylum and–well, the results speak for themselves.

You said once that businesses lack the willpower to brand devices as corporate, rather than personal, assets. Must this happen? Are platforms to “secure” bring-your-own devices not enough?
Let me throw that back at you, OK? How would you feel if the U.S. announced that we were putting our ballistic missile systems control into an iPad application and we were going to let the guys in the silos use their personal iPads so we could save a whole bunch of money?

It always depends: it depends what’s at stake, how replaceable it is, how easy it is to clean up an “oopsie” and whether you are really willing to be part of that “oopsie”. Every single journalist who has ever complained that some agency or company leaked a zillion credit cards or patient data or secrets should never ask the question you just asked me.

You should be asking why do they tolerate systems and software that are so bad, so shoddy, so mismanaged that they’ve got no idea what they are doing, yet they allow them to be used to access my bank account? Are you insane?! These problems are inevitable side-effects of poor configuration management, which is poor system management, which means “don’t know how to do IT”.

Yes, I do realize that I am arguing against today’s prevailing trends in IT management.

Do you still equate penetrate and patch to turd polishing? How prevalent is this and is it realistic to expect software vendors to change their attitude to security?
Yes, I do. It’s one thing for a sculptor to say they start with a block of marble and then chip away everything that doesn’t look like an angel, but that doesn’t work for software. You can’t start with the idea that a buggy mass of stuff [will] eventually turn into enterprise-class, failure-proof software by fixing bugs until there aren’t any more. No matter how much polish you put on a turd, it’s still a turd.

The software industry almost understands this–you’ll occasionally see some piece of software get completely re-architected because its original framework became limiting. As pieces of software get more complex and powerful, developers usually resort to things like source-code revision control, unit testing, regression testing, et cetera. Why doesn’t the idea that a security bug is just another bug sink in? If a manager can comprehend that there’s a major cost to an out-of-cycle patch because of some reliability failure, they ought to be able to understand that a security flaw is just a particularly painful out-of-cycle patch with bad publicity attached to it.

The problem is that the software industry is target-locked on time-to-market because that is where the big rewards are–asking them to do anything that might affect time-to-market is asking them to risk being an also-ran. Some of that can be managed by adopting a model of “write a toy version, throw it over the fence, and if it succeeds take the lessons learned and write a real version shortly after”, but I’m afraid that sometimes the toy version becomes the production codebase for a decade. We’ve seen the results of that and they’re not very pretty.

We’re about six years into the 10 by which you predicted hackers would no longer be portrayed as cool and that educating neo-luddite users on security would become a moot point. What’s your take on the current climate?
I think that, at least partly, thanks to the spread of malware and botnets, and the professionalization of cybercrime, a lot more “normal people” are less impressed with hacker culture. The “grey hat” community’s commercial interest is pretty clear to just about everyone now, so I think the hacking community has some reputation damage to deal with.

As far as educating neo-luddites, I think I was pretty much completely wrong there. Not wrong that education won’t help, but wrong that the newer generation of executives will have a better grasp of security. From where I sit it looks like it’s actually getting worse.

Which mobile platform will (or do you hope will) win out–the open Android, walled Apple or locked down Blackberry?
I wish they would all go away. Which they inevitably will. The song “Every OS Sucks” sums up my views very nicely. A disclosure: I bought an iPad because it plays movies nicely and doesn’t pretend to be a telephone. I do like the delivery model of “app store” systems for fielding software–it’s much better than letting users install things themselves or worse yet when the system comes bundled with 10,000 pieces of shovel-ware. I’m concerned about code quality, of course: it’s not going to be possible for the app stores to vet code for malware, and I’m not convinced the “walls” in the “walled garden” aren’t made of Swiss cheese.

You once told me privacy is a myth and something held by the privileged few. What is your take on privacy now, where do you think it is heading and what significance will this have?
I think that what I might have said is more that privacy has only ever been for the wealthy and powerful. What we’ve seen lately is the veneer coming off–the U.S. government is consistently and cheerfully trampling on privacy and has pardoned itself and its lackeys for all transgressions. Meanwhile, we see that if you read Sarah Palin’s e-mail you get in trouble, but if you read Joe Average’s e-mail you’re the FBI. Privacy is a privilege of power, because the powerful need it so they can enjoy the fruits of their power without everyone realizing how good they’ve got it.

Meanwhile, the entire population of the planet seems to want to join social-networking Web sites that exist to collect and re-sell marketing information and push ads in their users’ faces, then they complain when they discover that the sites are doing exactly what they were created to do. What else did they expect? I never really cared about privacy, but a few years ago I adopted a strategy of leading a fairly open life. It’s easy to get my phone number and address and e-mail address and to find out where I’ve been and who I’m sleeping with and what and how much I drink or what music I listen to. There are only a few things about my lack of privacy that annoy me and it’s mostly the stupidity of commercial marketing–I get a credit card offer from the same big bank every month. I’ve gotten one from them every month for 15 years. I periodically wonder why it hasn’t sunk in to them that I’m not interested, but I have a big garbage can and it’s their money they’re wasting.

I’m a subscriber of your six dumbest ideas–are there some that you would update?
The piece was originally going to have a few more dumb ideas than it did, but the next one to write about was “ignoring transitive trust”. I wrote that piece while I was stuck in Frankfurt Airport and I was pretty tired, and trying to explain why transitive trust makes a mockery out of most of what we see as “Internet security” was just too much for me to attempt. If I’d had more courage I’d have also tackled “cost savings achieved now will continue forever” for the outsourcing and cloud computing fans.

Could you briefly explain why you think cyberwar is BS?
There are several reasons cyberwar is BS: technological, strategic and logistical. The people who are promoting it are either running a snow-job (there’s a lot of money at stake!) or simply don’t understand that warfare is the domain of practicality and cyberwar is just a shiny, impractical toy. Unfortunately, there’s so much money involved that the people who are pushing it simply dismiss rational objections and incite knee-jerk fear responses by painting pictures of burning buildings and national collapse and whatnot.

[See a longer explanation of the cyberwar phenomenon on Ranum’s Rearguard podcast.]

Probably the shortest rebuttal of cyberwar is to point out that it’s only practical if you’re the power that would already expect to win a conventional war–because a lesser power that uses cyberwar against a superpower is going to invite a real-world response, whereas it’s attractive if you already have overwhelming real-world force–but then it’s redundant. Cyberwar proponents often argue by conflating cybercrime, cyberespionage, cyberterror and cyberwar under the rubric of “cyberwar” but they ignore the obvious truth that those activities have different and sometimes competing agendas.

A short cyberwar: “be glad we jacked you up with Stuxnet because otherwise we’d have bombed you”. A shorter cyberwar: “be afraid. give me money”.

This article was first published at ZDNet Australia.

Rapid tech adoption overwhelming security staff

Information security professionals are overwhelmed by the rapid deployment of new technologies in the workplace, potentially putting government agencies, businesses and consumers at risk, reveals a new study released Friday.

According to the 2011 (ISC)2 Global Information Security Workforce Study (GISWS), IT security personnel are challenged by the proliferation of mobile devices as well as the rise of cloud computing and social networking. Many of the professionals admitted they needed more training to manage these technologies, yet reported that such tools had already been deployed without security in mind.

Conducted by Frost & Sullivan in the second half of 2010, the study surveyed over 10,400 IT security professionals from the public and private sectors. U.S.-based respondents made up 61 percent of total respondents, while 22.5 percent were from Europe, Middle East and Africa. Respondents in Asia accounted for 16.5 percent of the sample pool.

Mobile “single most dangerous threat”
Organizations polled ranked mobile devices as the No. 2 security concern, after application vulnerabilities. At the same time, almost 70 percent of respondents said their companies had policies and technologies in place, such as encryption and mobile VPN (virtual private network), to meet the security challenges posed by portable devices.

In the report, Frost & Sullivan said mobile security could be the “single most dangerous threat to organizations for the foreseeable future”.

Security professionals, on the other hand, appeared more lax in their approach toward social media, treating it as a personal platform and doing little to manage it, reported the analyst firm. Less than half, or 44 percent, indicated their companies had policies in place to control access to social media sites.

Frost & Sullivan said it was “disappointed” that 28 percent of organizations globally had no restrictions on the use of social media.

Robert Ayoub, the research firm’s global program director for information security and author of the report, said in a statement that the pressure to “secure too much” and a resulting skills gap increasingly put a strain on IT security professionals. This, in turn, will create risk for organizations across the world in the coming years.

“The good news from this study is that information security professionals finally have management support and are being relied upon and compensated for the security of the most mission-critical data and systems within an organization,” Ayoub said. “The bad news is that they are being asked to do too much, with little time left to enhance their skills to meet the latest security threats and business demands.”

He added: “Information security professionals are stretched thin, and like a series of small leaks in a dam, the current overstretched workforce may show signs of strain.”

Manpower, skills key to risk management
The risks, according to Ayoub, can be mitigated by attracting quality talent to the field and investing in professional development for emerging skills.

The need for skills improvement was especially evident in the area of cloud computing–over 70 percent of survey respondents reported the need for new skills to properly secure cloud-based technologies.

However, nearly two-thirds of respondents in the (ISC)2 study indicated that they did not expect any budget increases this year for IT security personnel and training.

In terms of manpower growth, Frost & Sullivan estimates there are 2.28 million information security professionals globally as of 2010, of whom around 750,000 are based in the Asia-Pacific region. The analyst firm expects the region’s demand for security professionals to increase at a compound annual growth rate of 11.9 percent to over 1.3 million by 2015.

Ayoub noted: “As the study finds, these solutions are underway but the question remains whether enough new professionals and training will come soon enough to keep global critical infrastructures in the private and public sectors protected.”

NSA chief wants to protect ‘critical’ private networks

SAN FRANCISCO–The head of the National Security Agency (NSA) said today that the U.S. military should have the authority to defend “critical networks” from malware and other disruptions.

Gen. Keith Alexander, who is also the head of the Pentagon’s U.S. Cyber Command, said at the RSA Conference here that the NSA’s “active defenses” designed to defend military networks should be extended to civilian government agencies, and then key private-sector networks as well.

“I believe we have the talent to build a cyber-secure capability that protects our civil liberties and our privacy,” Alexander said.

Alexander’s comments come only two days after William Lynn, the deputy secretary of defense, offered the same suggestion. In an essay last year, Lynn likened active defenses to a cross between a “sentry” and a “sharpshooter” that can also “hunt within” a network for malicious code or an intruder who managed to penetrate the network’s perimeter.

But the power to monitor civilian networks for bad behavior includes the ability to monitor in general, and it was the NSA that ran the controversial warrantless wiretapping program under the Bush administration. Concerns about privacy are likely to turn on the details, including the extent of the military’s direct involvement, and whether Web sites like Google.com and Hotmail.com could be considered “critical” or the term would only be applied to facilities like the Hoover Dam.

Alexander offered little in the way of specifics today. “We need to continue to refine the roles of government and the private sector in securing this nation’s critical networks,” he said. “How do we extend this secure zone, if you will? How do we help protect the critical infrastructure, key resources?”

At the moment, the Department of Homeland Security (DHS) has primary responsibility for protecting critical infrastructure. A presidential directive (HSPD 7) says the department will “serve as a focal point for the security of cyberspace”. During an appearance at RSA two years ago, Alexander stressed that “we do not want to run cybersecurity for the U.S. government.”

That was then. After Cyber Command was created–following reports of a power struggle between DHS and the NSA–it moved quickly to consolidate its authority. An October 2010 memorandum of agreement (PDF) between the two agencies says they agree to “provide mutually beneficial logistical and operational support” to one another.

Senators Joseph Lieberman (I-Conn.) and Susan Collins (R-Maine) recently pledged to reintroduce a controversial bill handing President Obama power over privately owned computer systems during a “national cyberemergency,” with limited judicial review. It’s been called an Internet “kill switch” bill, especially after Egypt did just that.

Alexander didn’t address that point. “The intent would be: let’s build how we can do this with DOD, show we can extend that to the government, and then to key critical infrastructure,” he said.

Fighting spam and scams on Twitter

SAN FRANCISCO–Twitter presents a relatively new frontier for spammers, malware creators, and all around bad guys, which in turn has created the opportunity for security researchers and vendors alike to try to figure out, and put a stop to, their efforts.

One company that’s trying to get a handle on the size of the problem, and on ways to fight it, is Barracuda Networks. During a talk at the RSA security conference here, which wraps up Friday, Barracuda outlined some of the research it has been doing in this area over the past two years.

Paul Judge, chief research officer and vice president of cloud services for Barracuda, noted that what makes Twitter a particularly attractive target is that it is both a social network and a search engine. This lets scammers place their wares on a public feed to reach a list of followers, as well as seek new eyeballs by making use of trending keywords to have their wares appear in Twitter search results.

But who, you’re wondering, would follow a scammer on Twitter? It’s more common than you’d think, said Barracuda research scientist Daniel Peck. One example the company tracked was Download-Heaven, a site that was using a Twitter account to push links to hosted shareware filled with malware and Trojans.

Download-Heaven had 445 followers while following only one account itself. Peck said the scammers were following other Twitter users as a way of getting them to return the favor and follow Download-Heaven. Then the scammers would simply unfollow those users, leaving them to continue receiving the account’s updates, including links to malware.

Barracuda looked for that sort of imbalance as it tracked a raw stream of data from Twitter. It also looked for accounts that had been unfollowed by a lot of users over time; such accounts have often been recognized by other Twitter users as bad news. Finally, Barracuda tried to figure out the behaviors of typical users to see if it could put together additional filters that would spot users who were up to no good.

The result was a reputation system that looked at the Twitter public stream (through its API), as well as an extra 20,000 queries per hour outside of the normal public stream. The test ran for two years and evaluated tweet-to-follower ratios as well as the content of what users were sharing. What Barracuda found was that just 43 percent of Twitter users could be classified as “true”. These were users that had more than 10 followers, friends, and tweets. That was compared with the other 57 percent of the network, which fell into a bucket of questionables.
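Barracuda hasn’t published its exact classifier, but the thresholds quoted above suggest a simple heuristic along these lines. The sketch below is purely illustrative: the cutoff of 10 followers, friends and tweets comes from the figures in this article, while the account fields and everything else are assumptions.

```python
# Illustrative sketch: classify Twitter accounts as "true" users or
# "questionables" using the thresholds described above (more than 10
# followers, friends and tweets). The Account fields are assumptions.
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    followers: int   # accounts following this user
    friends: int     # accounts this user follows
    tweets: int      # total tweets posted

def is_true_user(account: Account, threshold: int = 10) -> bool:
    """Return True only if the account clears every activity threshold."""
    return (account.followers > threshold
            and account.friends > threshold
            and account.tweets > threshold)

if __name__ == "__main__":
    samples = [
        Account("download-heaven-style", followers=445, friends=1, tweets=300),
        Account("ordinary-user", followers=120, friends=80, tweets=2500),
    ]
    for acct in samples:
        label = "true user" if is_true_user(acct) else "questionable"
        print(f"{acct.name}: {label}")
```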

By analyzing the flow of accounts, Barracuda was also able to create a “crime rate”–the percentage of accounts created per month that end up getting suspended by Twitter. This number would swing wildly based on real-world events, such as Oprah joining the network, or the World Cup kicking into gear, which would bring in big swells of new Twitter users, and, in turn, flocks of scammers.

These topical items were another area Barracuda focused on during the test. Much like trying to game conventional search engines to get new eyeballs, scammers were adding topic tags and/or popular words and phrases to tweets to get them to show up in the “Trends” field on Twitter pages and higher up on Twitter’s search results pages. To track how widespread this practice was, Barracuda began grabbing popular search terms on Twitter every hour, and doing searches for them on the site. It would then look at the tweets that turned up, follow any included links, and look for malicious code on the resulting Web sites.
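In outline, that hourly crawl might look something like the following sketch. It is a hypothetical reconstruction, not Barracuda’s tooling; every helper function is a placeholder standing in for calls to Twitter’s search API and a URL malware scanner.

```python
# Hypothetical outline of the hourly crawl described above. Every helper here
# is a placeholder; a real crawler would call Twitter's search API and pass
# each discovered link to a malware scanner.
def fetch_trending_terms():
    # Placeholder for a call to Twitter's trending-topics endpoint.
    return ["#worldcup", "oprah"]

def search_tweets(term):
    # Placeholder for a Twitter search; returns tweets mentioning the term.
    return [{"text": f"free downloads {term}", "links": ["http://example.test/dl"]}]

def looks_malicious(url):
    # Placeholder for fetching the page and scanning it for malware.
    return url.endswith("/dl")

def crawl_once():
    findings = []
    for term in fetch_trending_terms():
        for tweet in search_tweets(term):
            for link in tweet["links"]:
                if looks_malicious(link):
                    findings.append((term, link))
    return findings

if __name__ == "__main__":
    # A real crawler would repeat this once an hour, as described above.
    print(crawl_once())
```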

What they found, after five months of searching for popular words and phrases on Twitter as well as on more traditional search engines like Google, Yahoo and Bing, was a total of 34,627 samples of malware. Twitter accounted for 8 percent of this total, with the other search engines logging the remainder.

“It’s interesting, because we’ve been doing this work for probably nine months to a year now, and the last time we really examined it and looked back on this, it charted very differently,” Judge said. “About 69 percent of the malware that we found was on Google at the time, only 1 percent was on Twitter.”

“A couple things happened,” Judge continued. “Google didn’t necessarily get better–there was more malware–basically Bing, Twitter, and Yahoo got worse. So, as the amount of malware increased, Google pretty much stayed steady with the amount of malware that was found there, but the other engines we started to see become a little more equal opportunity.”

To Twitter’s credit, the company has made several efforts to keep this malware at bay. Back in March of last year, it began routing links through a filter that scans for malware and keeps sullied links from being posted. It also employed its own link-shortening service that similarly vets links. And the company transitioned to using OAuth, which lets users authorize third-party applications without handing over their username and password, potentially keeping their credentials from being hijacked by rogue applications.

Judge closed by noting that Barracuda had put together its own tool that can help users see if they’ve accidentally befriended one of these spammy or scammy users, or posted one of their links. The free Profile Protector scans both your Facebook and Twitter profiles and identifies users that are on the company’s watch list.

FBI: We’re not demanding encryption backdoors

The FBI said today that it’s not calling for restrictions on encryption products that lack backdoors for law enforcement.

FBI general counsel Valerie Caproni told a congressional committee that the bureau’s push for expanded Internet wiretapping authority doesn’t mean giving law enforcement a master key to encrypted communications, an apparent retreat from her position last fall.

“No one’s suggesting that Congress should re-enter the encryption battles of the late 1990s,” Caproni said. There’s no need to “talk about encryption keys, escrowed keys, and the like–that’s not what this is all about”.

Instead, she said, discussions should focus on requiring that communication providers and Web sites have legally mandated procedures to divulge unencrypted data in their possession.

The FBI says that because of the rise of Web-based e-mail and social networks, it’s “increasingly unable” to conduct certain types of surveillance that would be possible on cellular and traditional telephones. Any solution, it says, should include a way for police armed with wiretap orders to conduct surveillance of “Web-based e-mail, social-networking sites, and peer-to-peer communications technology”.

Caproni tried to distance the FBI from its stance a decade ago, when it was in the forefront of trying to ban secure encryption products that are, in theory, unbreakable by police or intelligence agencies.

“We are very concerned, as this committee is, about the encryption situation, particularly as it relates to fighting crime and fighting terrorism,” then FBI director Louis Freeh told the Senate Judiciary committee in September 1998. “Not just bin Laden, but many other people who work against us in the area of terrorism, are becoming sophisticated enough to equip themselves with encryption devices.”

In response to lobbying from the FBI, a House committee in 1997 approved a bill that would have banned the manufacture, distribution, or import of any encryption product that did not include a backdoor for the federal government. The full House never voted on that measure. (See related transcript.)

Even after today’s hearing ended, it wasn’t immediately clear whether the members of the House Judiciary crime subcommittee would seek to expand wiretapping laws as a result.

Rep. Bobby Scott, D-Va., said that the panel’s members received a secret briefing last week from the FBI, but that the bureau should make its arguments in public. “It is critical that we discuss this issue in as public a manner as possible,” he said. It’s “ironic to tell the American people that their privacy rights may be jeopardized because of discussions held in secret”.

Rep. John Conyers, D-Mich., said “to me this is a question of building backdoors into systems…I believe that legislatively forcing telecommunications providers into building backdoors into systems will actually make us less safe and less secure.”

That was echoed by Susan Landau, a computer scientist at Harvard University’s Radcliffe Institute for Advanced Study, who said “there aren’t concrete suggestions on the table…I don’t quite understand what the FBI is pushing for.”

Caproni said her appearance before the panel was designed to highlight the problems, not call for specific legislation. But, she added, “it’s something that’s being actively discussed in the administration.”

Under a 1994 federal law called the Communications Assistance for Law Enforcement Act, or CALEA, telecommunications carriers are required to build backdoors into their networks to assist police with authorized interception of conversations and “call-identifying information”.

As CNET was the first to report in 2003, representatives of the FBI’s Electronic Surveillance Technology Section in Chantilly, Va., began quietly lobbying the FCC to force broadband providers to provide more-efficient, standardized surveillance facilities. The Federal Communications Commission approved that requirement a year later, sweeping in Internet phone companies that tie into the existing telecommunications system. It was upheld in 2006 by a federal appeals court.

But the FCC never granted the FBI’s request to rewrite CALEA to cover instant messaging and VoIP programs that are not “managed”–meaning peer-to-peer programs like Apple’s FaceTime, iChat/AIM, Gmail’s video chat, and Xbox Live’s in-game chat that do not use the public telephone network.

Also not covered by CALEA are e-mail services or social-networking sites, although they must comply with a wiretap order like any other business or face criminal charges. The difference is that those companies don’t have to engineer their systems in advance to make them easily wiretappable.

Cybercrime costs US$43B a year

Cybercrime is costing the United Kingdom 27 billion pounds (US$43.5 billion) a year, according to the government, which has pledged to work with businesses to combat the problem.

The total figure covers 21 billion pounds (US$33.8 billion) from losses suffered by businesses, 3.1 billion pounds (US$5 billion) by citizens and 2.2 billion pounds (US$3.5 billion) by government, the Office of Cyber Security and Information Assurance (Ocsia) said in a report summary published on Thursday. It did not account for the other 700 million pounds (US$1.1 billion).

The report, produced by Ocsia and BAE Systems security subsidiary Detica, marks the first time the government has made a public estimate of cybercrime costs. At a press launch event, security minister Baroness Pauline Neville-Jones emphasized that while the figures are an estimate, they still give an indication of the scale of economic loss suffered by the U.K.

Read more of “Cybercrime costs the UK £27bn a year” at ZDNet UK.

Securing the smart grid no small task

SAN FRANCISCO–The road to a secure smart grid is still being built. Can it be finished in time to keep next-generation threats at bay?

That question was left largely unanswered during a panel discussion on “securing the smart grid” at the RSA security conference taking place here this week.

The smart grid promises to bring a number of benefits to both consumers and utilities in the coming years–things like intelligent off-peak appliance use; real-time metering; and customer education on efficiency and conservation. But bringing that kind of experience to fruition is still a work in progress, with some of the blame being placed on utility companies for not being agile enough when it comes to security, interconnectivity, and the like.

According to specialists, the problem is, and continues to be, huge fragmentation among the power companies, which is enough of an issue on its own; but, as the panelists lamented, the same fragmentation also threatens the technologies these companies plan to roll out.

“In my experience, utility companies are very siloed,” said Mike Echols, the program manager for critical-infrastructure protection at the Salt River Project in Arizona. “Each of those silos has its own IT groups, and there’s a reason for that. They don’t want to converge because in typical IT that’s considered a risk.”

In the electricity industry that risk has become more apparent after what happened last year with Stuxnet, the computer worm that targeted homogeneous industrial systems and represents the first in a wave of expected attacks aimed at infrastructure. As the grid gets more intertwined with consumer electronics and home area networks, the range of potential targets is expected to widen.

So what would it take to make utilities less fractured from an IT perspective? Echols suggested that IT security be put higher on the ladder of the corporate structure of these utility companies, so that important decisions trickled down into the subgroups. “Cybersecurity tends not to be in a leadership position,” he said, while noting that this is beginning to change with increased compliance, which is driving changes in the power industry.

Another big issue, as noted by panelist Gib Sorebo, chief cybersecurity technologist for SAIC, is that outside security companies looking to do business with the utilities first need to gain a deep understanding of power companies before trying to tackle security challenges.

“We have to know how important it is for us to understand how everyone does their jobs, what the concerns are, and what the potential impact is depending upon what kind of events take place–and to show that communication,” Sorebo said. “You see that same kind of thing happening in banking.”

One question that lingers is whether a system that’s simply more secure will be able to handle evolving threats. Heath Thompson, the CTO at Landis+Gyr, said the industry hadn’t come to grips with that yet, but that there were the beginnings of a foundation for stronger security across the entire ecosystem. To attack new threats head-on, however, the systems need to be readily adjustable, with things like upgradeable firmware and infrastructure.

Ultimately though, making the grid too connected from a technology perspective could do just as much harm as good, which is why the right safeguards have to be put in place. “The smart grid can do a lot of wonderful things in terms of automation and finding events quickly,” Sorebo said. “But it can also automate disaster, and that’s something that more and more people obviously need to focus on.”

S’pore sets data protection law for 2012

SINGAPORE–It has been several years in the making, but the nation is now ready to take another step closer to introducing a data protection regime, with the Singapore government announcing plans to put forth legislation for debate in parliament in early 2012.

The proposed laws will provide a “baseline standard for data protection in Singapore”, Lui Tuck Yew, minister for Information, Communications and the Arts, indicated on Monday in a written response to a parliamentary question.

According to Lui, a review initiated five years ago to assess the need for a data protection system and the appropriate model for the country has now been completed.

The government, he said, “concluded it would be in Singapore’s overall interests” to put in place such a regime, designed to “protect individuals’ personal data against unauthorized use and disclosure for profit”.

“The proposed law is intended to curb excessive and unnecessary collection of individuals’ personal data by businesses, and include requirements such as obtaining the consent of individuals to disclose their personal information,” the minister said.

“It will also enhance Singapore’s overall competitiveness and strengthen our position as a trusted hub for businesses and a choice location for global data management and processing services.”

As part of the data protection regime, a Data Protection Council is expected to be established to oversee the implementation of the legislation, Lui added.

Meanwhile, the country’s ICT regulator, the Infocomm Development Authority of Singapore (IDA), will engage relevant stakeholders in further consultation and work to address concerns from the “public, private and people sectors”.

Bryan Tan, director at Keystone Law, pointed out that businesses must “start making preparations for the arrival of the legislation”. To prepare for the data protection regime, they need to reexamine their databases and data collection practices, the Singapore-based lawyer said in a circular Tuesday.

“Businesses that are unprepared may have to pay a heavy price,” he warned.

HP, VMware plan further product integration

HP and VMware plan to develop and market a range of intrusion prevention security products, in a collaboration that builds on existing work.

The hardware maker and virtualisation company said on Tuesday that they aim to tailor HP’s TippingPoint Intrusion Prevention System (IPS) range of products to fit VMware‘s virtualisation security vShield and management vCloud Director packages.

The integration will allow security management to extend across physical and virtual IT stacks, and will let IT professionals automate “the processes of scanning, identifying threats and blocking attacks” across these areas, HP said in a statement.

Read more of “HP and VMware plan further product integration” at ZDNet UK.

Microsoft looks to healthcare for improved security

SAN FRANCISCO–Microsoft wants to make tomorrow’s tech-security world work a lot like tomorrow’s healthcare industry.

While the comparison has long been made in the security industry, with threats like “viruses”, Scott Charney, corporate vice president in Microsoft’s Trustworthy Computing group, noted that the response to those problems has fallen short in areas where healthcare has proved more agile.

“Every year there’s a new version of the flu,” Charney said to attendees of this year’s RSA Conference. “There was a time before SARS, and a time before H1N1. And when those threats appeared, [the healthcare industry] didn’t scramble to know what to do, they already had defenses.”

Microsoft’s multistep plan to put a similar safety net in place approaches the problems from both a security and a data ownership position.

Charney said one option is cryptographically signed health certificates. These would be provided for users who had gone through various security check protocols to prove their machine was not dripping with malware before getting on something like a bank’s site or a local intranet.
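As a rough illustration of the idea, and not Microsoft’s design, a health certificate could be a small, signed claim that a relying site checks before granting access. The sketch below uses an HMAC with a shared key purely for brevity; a real scheme would use public-key certificates issued by a trusted health authority, and every name and value here is invented.

```python
# Illustrative only: a "health certificate" as a signed claim that a machine
# passed its security checks. Real systems would use asymmetric signatures and
# a certificate authority rather than a shared HMAC key.
import hashlib, hmac, json, time

SHARED_KEY = b"demo-key-not-for-real-use"

def issue_health_certificate(machine_id: str, checks_passed: bool) -> dict:
    claim = {"machine": machine_id, "healthy": checks_passed, "issued": int(time.time())}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_health_certificate(cert: dict, max_age_seconds: int = 3600) -> bool:
    payload = json.dumps(cert["claim"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    fresh = time.time() - cert["claim"]["issued"] <= max_age_seconds
    return hmac.compare_digest(expected, cert["signature"]) and cert["claim"]["healthy"] and fresh

if __name__ == "__main__":
    cert = issue_health_certificate("laptop-042", checks_passed=True)
    print("admit to bank site?", verify_health_certificate(cert))
```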

The second aspect of this measure would be alerting people to possible security holes before their machines have been compromised. That way, they could put fixes in place before encountering attack scenarios, as well as avoid compatibility issues with sites and services.

Charney also highlighted the importance of making sure whatever lockdown system went into place for compromised machines would not go too far, so critical services like VoIP weren’t being sealed off as well. After all, Charney said, nobody wants to be kept from calling 911 during a heart attack because their computer needs to download software updates.

Symantec brings reputation security to the enterprise

SAN FRANCISCO–Security giant Symantec is trying to give companies a better way to determine how trustworthy files are.

At the RSA Conference here, Symantec CEO Enrique Salem outlined the new reputation-based security feature built into the company’s new Endpoint Protection 12, client-side security software that gives files a score based on the scanning of 2.5 billion files the company keeps track of in its cloud-based database.

Dubbed the “Insight Reputation System”, the feature looks at files that have been downloaded from the Web and gives each one a score based on risk. This is based on what kinds of things the file does, as well as who it’s from.

“The idea of a blacklisting approach is no longer going to be effective, and Internet Protocol-based recognition where we track IP addresses is not good enough,” Salem said. “We need real-time, contextual tracking that looks at a series of attributes, things like file age, download source and prevalence, and brings all those things together.”
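Symantec hasn’t published Insight’s scoring model, but the attributes Salem mentions (file age, download source, prevalence) lend themselves to a simple weighted score. The sketch below is a hypothetical illustration only; the weights, caps, cutoff and field names are all invented, not Symantec’s algorithm.

```python
# Hypothetical reputation score built from the attributes mentioned above.
# The weights, caps and cutoff are invented for illustration only.
def reputation_score(file_age_days: int, prevalence: int, trusted_source: bool) -> float:
    """Higher scores mean more trustworthy; range is roughly 0.0 to 1.0."""
    score = 0.0
    score += min(file_age_days / 365.0, 1.0) * 0.4   # long-lived files are better known
    score += min(prevalence / 10000.0, 1.0) * 0.4    # widely seen files are less suspect
    score += 0.2 if trusted_source else 0.0          # reputable download source
    return score

def verdict(score: float, cutoff: float = 0.5) -> str:
    return "allow" if score >= cutoff else "flag for inspection"

if __name__ == "__main__":
    brand_new = reputation_score(file_age_days=2, prevalence=3, trusted_source=False)
    well_known = reputation_score(file_age_days=400, prevalence=250000, trusted_source=True)
    print("brand-new rare file:", verdict(brand_new))
    print("old widespread file:", verdict(well_known))
```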

The tool for that, Salem said, is Endpoint Protection 12, which the company claims is the only reputation-based system that’s context-aware. The new tool, which is the first major update to the Endpoint Protection suite in three years, will be released in April.

Salem also went into specifics about how it was becoming increasingly important to identify threats at the point of download given the consumerization of IT and the proliferation of consumer devices within businesses–both things that have made it increasingly difficult to keep threats at bay, and represent the new battleground for threat activity itself.

“It wasn’t that long ago that you as security professionals had control,” Salem said. “You had control of the desktop, you had control of the database, you had control of the applications, you had control of the servers, and to some extent, you even had control of the users.”

The problem, Salem said, was that control had been toppled with new devices, and new ways of doing business. “Now what’s happening is that those days are over, because all kinds of devices are coming into your office: USB drives, notebooks, and many of them aren’t your devices. They’re your partners, they’re people that are bringing them into your environment,” Salem said. “And what are they doing? They’re accessing corporate e-mail, they’re logging into their Facebook pages, and their Twitter accounts.”

Symantec’s solution to get above the problem is a new initiative called O3, which Salem compared to the Earth’s ozone layer, protecting the surface from outside forces. O3 is made up of three security layers:

1. A rules engine for enforcing which information specific devices can access, and from where.
2. A protection enforcement layer that determines which employees, on which devices, can access that information.
3. A compliance and monitoring layer for auditing access and understanding which policies are being enforced.

“That’s our approach, that’s our vision for what has to be done. It has to be a layer above the clouds,” Salem said.

US Defense Dept. proposes armoring civilian networks

SAN FRANCISCO–A top Defense Department official said today that the United States military should “extend” a technological shield used to protect its own networks to important private sector computers as well, which could sweep in portions of the Internet and raise civil liberty concerns.

William Lynn, the deputy secretary of defense, proposed at the RSA Conference extending “the high level of protection afforded by active defenses to private networks that operate infrastructure” that’s crucial to the military or the U.S. economy.

What Lynn refers to as “active defenses” were pioneered by the National Security Agency. In an essay last year, Lynn likened them to a cross between a “sentry” and a “sharpshooter” that can also “hunt within” a network for malicious code or an intruder who managed to penetrate the network’s perimeter.

But the power to monitor civilian networks for bad behavior includes the ability to monitor in general, and it was the NSA that also pioneered a controversial warrantless wiretapping program under the Bush administration. NSA director Keith Alexander was named head of the U.S. Cyber Command last year, an idea that Lynn had championed.

Concerns about privacy are likely to turn on the details, including whether the military merely provides source code for defensive and offensive technologies–or if it includes actual authority and oversight. Another open question is whether Web sites like Google.com and Hotmail.com could be considered “critical infrastructure”, or the definition would be narrowed to facilities like power plants.

Lynn, who has been speaking frequently about cybersecurity threats in the last year, didn’t elaborate. “Securing military networks will matter little if the power grid goes down or the rest of the government stops functioning,” he said.

That echoes comments made by Sens. Joseph Lieberman (I-Conn.) and Susan Collins (R-Maine), who have pledged to reintroduce a controversial bill handing President Obama power over privately owned computer systems during a “national cyberemergency”, with limited judicial review. It’s been called an Internet “kill switch” bill, especially after Egypt did just that.

At the moment, the Pentagon is responsible only for defending .mil computers, and the Department of Homeland Security has responsibility for other governmental networks. Lynn said the military (and remember, the NSA is part of the Defense Department) is aiding DHS, much like it provides troops and helicopters to aid after a natural disaster.

“The military provides support to DHS in the cyber domain,” Lynn said. Like equipment and troops provided to FEMA, he added, military “cyber” support will be “available to civilian leaders to help protect the networks that support government operations and critical infrastructure…These resources will be under civilian control and be used according to civilian laws.”

“Through classified threat-based information and the technology we have developed to employ a network defense,” he said, “we can significantly increase the effectiveness of cybersecurity practices that industry is carrying out.”

Homeland Security hinted at this during an interview with ZDNet Asia’s sister site CNET last year at the RSA conference. The department said at the time that it might eventually extend its Einstein 3 technology, which is designed to detect and prevent in-progress cyberattacks by sharing information with the NSA, to networks operated by the private sector.

Stuxnet expert: other sites were hit but Natanz was true target

Stuxnet may have hit different organizations, but its main target was still the Natanz nuclear enrichment plant in Iran, an expert who has analyzed the code said Monday.

Ralph Langner, who has been analyzing the code used in the complicated Stuxnet worm, which exploited a Windows hole to target industrial control systems used in gas pipelines and power plants last year and possibly earlier, said the initial distribution of Stuxnet was limited to a few key installations.

“My bet is that one of the infected sites is Kalaye Electric,” he wrote in an e-mail to ZDNet Asia’s sister site CNET. “Again, we don’t have evidence for this, but this is how we would launch the attack–infecting a handful of key contractors with access to Natanz.”

Langner was responding to a report (PDF) released late last week by Symantec that said five different organizations in Iran were targeted by a variant of Stuxnet, several of them more than once, dating back to June 2009.

“We have a total of 3,280 unique samples representing approximately 12,000 infections,” the Symantec researchers write in a blog post about the report. “While this is only a percentage of all known infections, we were able to learn some interesting aspects of how Stuxnet spread and where it was targeted.”

The Symantec researchers, who have made other important discoveries in the quest to de-code Stuxnet, don’t name the organizations they suspect as targets. As of September 2010, they had estimated there were more than 100,000 infected hosts, nearly 60 percent of them in Iran.

“Unfortunately Symantec doesn’t tell the geographic location of the targeted organizations,” Langner said. “My theory is that not all may be in Iran since chances are that at least one significant contractor is a foreign organization (this is something we are researching presently).”

Langner said he and partners have been able to match data structures from one of the parts of the multi-pronged Stuxnet attack code with the centrifuge cascade structures in Natanz.

“The significance of this is that it is now 100 percent clear that Stuxnet is about Natanz, and Natanz only,” he said. “Further evidence (that matches with the recent discoveries of Symantec) suggests that Stuxnet was designed as a long-term attack with the intention not only to destroy centrifuges but also to lower the output of enriched uranium.”

Langner, based in Germany, offers more technical details of Stuxnet on his blog.

Symantec and Intel collaborate on security

Symantec and Intel have worked together to embed two-factor authentication technology into the hardware of second-generation Intel Core and Core vPro processors.

The work will integrate Symantec‘s VeriSign Identity Protection (VIP) cloud-based security product with Intel’s Identity Protection Technology (IPT), the security company announced last Wednesday.

“By synchronizing VIP with the Intel chipset, we have created the first ever strong authentication credential that you will never see but will always have in your PC,” Atri Chatterjee, vice president of User Authentication at Symantec, said in a statement. “The combination of our proven VIP service with Intel IPT provides users with a new level of ‘built-in’ strong authentication.”

Read more of “Symantec and Intel collaborate on security” at ZDNet UK.

Facebook scams aplenty

With Valentine’s Day round the corner, cybercriminals are once again “cashing in” on the commercialization of the event, hoping to scam unsuspecting Facebook users.

A new entry on Sophos’ Naked Security blog warned that rogue apps with names such as Valentine’s Day and Special Valentine have been making the rounds on the social media site, tricking users into involving their friends in the scam.

Senior technology consultant Graham Cluley said the modus operandi of these apps was to get users to click on the splash screen, which would then display a teaser, claiming it would send a poem to the selected friends.

But what the apps are really after is the personal information of users who unknowingly “Allow” them access, warned Cluley. The apps would then post messages on the user’s wall, luring his or her friends to complete an online survey disguised as a “Facebook Anti-Spam Verification” dialog box. The scammers earn commission for every completed survey.

The security expert also cautioned that cybercriminals are known to have sent rogue Valentine’s Day e-cards in the past to spread viruses on computers, and called on users not to let their guard down.

Cheap spam tool
Separately, Symantec engineers have detected a popular viral Facebook application toolkit known as NeoApp that allows anyone to create applications for the social network. The toolkit guides the ‘developer’ on, for example, where to place links to funny videos and where to put survey links in order to maximize cashback.

Once a user installs an application created with the toolkit, the cybercriminal can send messages to unsuspecting users and their friends through statistics pages and easy-to-use templates, the security vendor warned in a blog post.

With the app priced at US$50 or less, it “pretty much allows anyone, even those without coding skills, to create a fast-spreading viral message on Facebook”, Symantec’s Candid Wueest said.

According to him, the app will also have access to affected users’ private data, such as personal e-mail addresses, and “administrators” controlling the app will be able to send convincing spam mail.

Wueest added that the app itself and what it does are against Facebook’s usage policy.

He advised that there is no need to install an application just to see images, and that users of the social media site should always exercise vigilance when an app requests access to personal information.

McAfee: Data theft attacks besiege oil industry

For years, companies in the oil and energy industry have been the victims of attempts by hackers, believed to be in China, to steal e-mail and other sensitive information, according to a new report from McAfee.

The attacks, to which McAfee gave the sinister name “Night Dragon”, penetrated company networks through Web servers, compromised desktop computers, bypassed safeguards by misusing administrative credentials, and used remote administration tools to obtain the information, the security firm said Thursday. McAfee and other security companies now have identified the method and can provide a defense.

“Well-coordinated, targeted attacks such as Night Dragon, orchestrated by a growing group of malicious attackers committed to their targets, are rapidly on the rise. These targets have now moved beyond the defense industrial base, government, and military computers to include global corporate and commercial targets,” McAfee said in a white paper (PDF) published today.

And the attack was at least partially successful, McAfee said. “Files of interest focused on operational oil and gas field production systems and financial documents related to field exploration and bidding that were later copied from the compromised hosts or via extranet servers.

“In some cases, the files were copied to and downloaded from company Web servers by the attackers. In certain cases, the attackers collected data from SCADA systems,” the supervisory control and data acquisition systems that control and monitor industrial processes.

McAfee didn’t reveal details about what SCADA data was involved, but it’s a potentially serious matter: such systems are at the operational heart of everything from oil pipelines and refineries to factories and electrical power distribution networks.

McAfee told The Wall Street Journal that the attacks appeared to be purely about espionage, not sabotage. The latter possibility has become a more vivid fear with the Stuxnet attack that apparently damaged Iranian nuclear operations. China is a particular concern: it’s a rising industrial power that Google has implicated in attempts to crack its own network and obtain sensitive information.

McAfee notified the FBI of the Night Dragon attacks, and the FBI is investigating, the Journal reported.

Several Night Dragon attacks were launched in November 2009, McAfee CTO George Kurtz said in a blog post, but attacks have been going on for at least two years and likely as long as four.

“We have strong evidence suggesting that the attackers were based in China,” Kurtz said. “The tools, techniques, and network activities used in these attacks originate primarily in China. These tools are widely available on the Chinese Web forums and tend to be used extensively by Chinese hacker groups.”

The attacks themselves used a variety of methods that, although described as “relatively unsophisticated”, were nonetheless effective.

First came an attack to compromise a Web server that then became a host for a variety of hacking tools that could probe the company’s internal network. Password cracking and other tools were used to gain access to PCs and servers. Remote administration software, including one called zwShell, let attackers control compromised Windows PCs to gather more data and push the attack toward more sensitive areas.

An appendix of the white paper offers more details on the Chinese connection:

While we believe many actors have participated in these attacks, we have been able to identify one individual who has provided the crucial C&C infrastructure to the attackers–this individual is based in Heze City, Shandong Province, China. Although we don’t believe this individual is the mastermind behind these attacks, it is likely this person is aware or has information that can help identify at least some of the individuals, groups, or organizations responsible for these intrusions.

The individual runs a company that, according to the company’s advertisements, provides “Hosted Servers in the U.S. with no records kept” for as little as 68 RMB (US$10) per year for 100 MB of space. The company’s U.S.-based leased servers have been used to host the zwShell C&C [command and control] application that controlled machines across the victim companies.

Beyond the connection to the hosting services reseller operation, there is other evidence indicating that the attackers were of Chinese origin. Beyond the curious use of the “zw.china” password that unlocks the operation of the zwShell C&C Trojan, McAfee has determined that all of the identified data exfiltration activity occurred from Beijing-based IP [Internet Protocol] addresses and operated inside the victim companies weekdays from 9:00 a.m. to 5:00 p.m. Beijing time, which also suggests that the involved individuals were “company men” working on a regular job, rather than freelance or unprofessional hackers. In addition, the attackers employed hacking tools of Chinese origin and that are prevalent on Chinese underground hacking forums. These included Hookmsgina and WinlogonHack, tools that intercept Windows logon requests and hijack usernames and passwords…

Although it is possible that all of these indicators are an elaborate red-herring operation designed to pin the blame for the attacks on Chinese hackers, we believe this to be highly unlikely. Further, it is unclear who would have the motivation to go to these extraordinary lengths to place the blame for these attacks on someone else.

Researchers demo iPhone passwords hack

A German research firm has demonstrated how passwords stored on an iPhone can be retrieved in less than six minutes without needing to know the passcode.

Researchers from German engineering and research firm Fraunhofer tested the hack on an iPhone 4 and iPad 3G running iOS 4.2.1 and found that it was possible to access a range of passwords stored on the device, including: MobileMe, Google Mail as a Microsoft Exchange account, Microsoft Exchange email accounts, VPN logins and Wi-Fi network credentials.

The researchers said that the hack was relatively easy to perform and used freely available tools. However, they did have to jailbreak the device and install an SSH server in order to access the phone and copy the keychain access script that allows access to the stored information.

Read more of “Researchers demo iPhone passwords hack” at ZDNet UK.

Major Aust banks expose credit card data

Australia’s biggest banks are posting credit card numbers in clear view on mailed customer statements in a direct violation of credit card security regulations.

Placing numbers where any mail thief could grab them is a fundamental breach of the troubled Payment Card Industry Data Security Standard (PCI DSS), according to sources in the industry.

The industry standard, drafted by card issuers Visa, MasterCard and American Express and enforced by banks, is a series of security rules to which any business dealing with credit card transactions must adhere.

The standard is a collaborative industry effort to reduce financial fraud by mandating baseline security measures that essentially must accompany any credit card transaction. A call center operator, for example, would be required to destroy a paper note if it was used to temporarily jot down a credit card number, while a Web site that stores transaction information must ensure it is adequately secure.

Non-compliant large businesses–or tier 1 organizations bound by strict rules–face hundreds of thousands of dollars in fines, and risk losing their ability to process credit cards. The fines scale according to the number of credit card transactions processed.

But St George and the Commonwealth Bank have breached rule 101 of the standard by sending out potentially millions of paper statements to letterboxes that clearly detail credit card numbers in full.

The credit card numbers are listed as an account reference and match those shown on the cards number-for-number.

The breach has been known to card issuers for years, but they have failed to push the banks to change their practice.

Sources within the issuers working on PCI DSS compliance say they want the banks to truncate, or scramble, the numbers, but they have received a cold response.

Commonwealth Bank said that it was considering this as an overall security issue, but internal and external assessments led it to believe that it was compliant with the PCI DSS standard.

St George had not responded at the time of writing.

ANZ Bank has truncated the last four digits of its account numbers detailed on paper statements so they do not match Visa and MasterCard credit cards.

The bank said it made the change in 2001 during a “large investment” to improve credit card security. Its customers use a single account number for all dealings with the bank.

IP Payments director Mark Lewis said the banks practised double standards by allegedly ignoring the PCI DSS breach while enforcing the regulations on merchants.

“The banks have been beating their drum that everyone should be PCI [DSS] compliant when the standard came into effect. It is hypocritical,” Lewis said. His company offers PCI DSS compliance services, which includes means to truncate credit card numbers as they appear on printed statements.

“The systems are so old that changing those numbers would be a nightmare. At the end of the day, these systems are 30 years old, much older than PCI [DSS], and the banks are struggling to keep them compliant.” Yet he didn’t think banks could rest on that excuse.

While the paper statements omit credit card expiry dates or Card Security Value numbers, the former can be simply guessed or ascertained through social engineering, according to PCI DSS experts.

Since credit cards expire within four years, a fraudster can use a process of elimination to determine the date: there are at most 48 month-and-year combinations to try. They need only enter each candidate date, along with the card number, into a Web site until one works.

“It is potentially a huge risk,” Lewis said. “The volume of numbers going out if someone was to cotton on to it would make it an ideal target.” He said a criminal would attempt to intercept the statements, by exploiting potential vulnerabilities in the production and distribution process.

Only some online and telephone-based payment systems require the Card Security Value number located on the back of credit cards. This cannot be guessed but could be acquired from banks by masquerading as a victim using their identity credentials lifted from the statement and Internet Web sites.

Sense of Security chief operating officer Murray Goldschmidt said the banks are dealing with more risky fraud vulnerabilities.

“Some 72 percent of fraud is card-not-present, or online fraud; the amount of fraud through other means is smaller and could be at a level.

“Online databases of credit cards are clearly an easy way for criminals to extract large amounts of data in the time it would take to steal a few [paper] statements.”

A source at another card issuer agreed that the standard was focused on “frying bigger fish”, although they did say that putting the numbers on statements was a clear breach of standard requirements.

The industry has struggled to adhere to the standard since its introduction some five years ago, even after the November 2010 deadline meant non-compliance would bring financial penalties. Banks have allegedly been absorbing penalties, a practice Lewis expects will continue into the near future.

This article was first published at ZDNet Australia.

Google extends two-step log-in process to all

Now all Google users can take advantage of the two-step log-in procedure previously available to Google Apps customers.

The company started rolling out the option to use two-step verification to Google Account holders Friday, according to a blog post. The idea comes from a classic security tactic, the notion that accounts are more secure when you log in using two factors: something you know, such as a password, and something that only you have, such as your phone.

Google Apps users started using this feature in September. Account holders log in to Google as usual but, with the two-step process enabled, they are also asked for a code, which they receive via a voice call or text message or generate themselves using a mobile app available for iPhone, Android, or BlackBerry. The verification can be remembered on a given computer for 30 days.
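
The article does not describe Google's implementation, but codes of this kind are typically time-based one-time passwords. As a minimal sketch (assuming the RFC 6238 TOTP scheme, not Google's actual code), both the phone app and the server derive the same short-lived code from a shared secret, which is why the server can check what the user types in:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example shared secret (base32); the phone app and server would both hold it.
print(totp("JBSWY3DPEHPK3PXP"))
```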

Obviously it will be much harder for anyone bent on hacking your account to steal a code sent to your phone (unless you’re a valuable enough target to warrant stealing your phone and hacking your password). It’s an optional feature, but one strongly recommended by security experts.

Experts renew call for greater Facebook security

With security threats continuing to plague Facebook, such as the recent abuse of CEO Mark Zuckerberg’s fan page, experts have renewed calls for the social networking site to step up user protection and education.

Zuckerberg was not the only prominent personality to suffer from a Facebook page hack last month–French President Nicolas Sarkozy was also a victim, according to the Huffington Post. The two high-profile incidents happened in the same week.

Yet Facebook, according to these security observers, remains extremely popular despite these incidents and other threats such as rogue apps.

On the one hand, Facebook wants compelling applications to attract new subscribers and increase the amount of time users spend on the site. On the other, its controls on developers are less than stringent.

“Anyone can sign up and create a bogus Facebook application,” said Chester Wisniewski, senior security advisor at Sophos, in an e-mail interview, adding that affected users can be redirected to malicious URLs without being prompted.

This, he explained, happened with the Koobface worm, which prompted users to download a “FacebookPhotos###.exe” file even before requesting permission for data access.

Wisniewski added that this form of “clickjacking” still occurs, but Facebook claims it is a “browser problem”.

In an earlier report published by ZDNet Asia’s sister site CNET, Facebook’s chief security officer Joe Sullivan was quoted as saying the team does not practice the “gatekeeper approach” when it comes to app vetting. Instead, it “devotes its energy to the ones that could cause the most damage if they were bad”.

Measures taken, but more can be done
To its credit, Facebook has activated “advanced security controls” to protect at-risk accounts. According to the CNET report, when an account is detected as having an unusually large number of posts, or posting dubious links, the “roadblocks” devised by the team will direct the user to a McAfee cleanup tool that can be used immediately.

The team, which includes staff dedicated to incident response, has also just rolled out the HTTPS (Hypertext Transfer Protocol Secure) encryption feature for all activities, not just password entry.

Still, the approach was challenged by Wisniewski, who claimed that security should be adopted from “inside out”, such as configuring the firewall, and not the other way round. To that end, Facebook should make HTTPS a default, not something for the user to opt into, he argued.

“Facebook has taken the opposite approach and I feel [its] users will pay the price in privacy and security until it chooses to implement stronger privacy controls in reaction to these incidents,” said Wisniewski.

Randy Abrams, ESET’s director of technical education, also agreed Facebook can do more for its users. “Facebook doesn’t consider security to be enough of a priority to even mention the word on the log-in screen.

“Facebook can and should do a lot more to promote security education with their users.”

Users an ‘unsolved vulnerability’
Likening Facebook to an “operating system” such as Microsoft Windows, Abrams said it will be subject to security breaches and not be able to protect everyone.

“An operating system is designed to run programs, but it can’t know if the program is good or bad,” he explained.

While Facebook is far from facing a security crisis, Abrams said its users remain “the biggest unsolved vulnerability”, one on which Facebook “falls flat on its face”.

Sophos’ Wisniewski concurred, noting that users “simply don’t care” about security.

Users, he pointed out, do not seem to be aware of the security issues associated with Facebook; security breaches have also not stopped those concerned and worried about their profiles from logging in and sharing their lives on the site.

Other sites beware
Other social media sites are also equally at risk, even though their user base may be smaller, warned both experts.

According to Abrams, apart from the user base, there are risk factors such as ease of attack and an attacker’s own motivations. “Other social media sites are equally susceptible but may not get as much attention from the criminal element,” he said, adding that criminals are always on the lookout for vulnerabilities.

No matter how secure a Web site is, users cannot prevent their profiles from getting hacked, said Abrams and Wisniewski. One important way of staying safe is to limit the information that is made public, they noted.

In addition, users should set strong passwords that are not recycled for other sites, and enable the HTTPS option when it is available in the profile.
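
As a small illustration of the first piece of advice (a sketch only; a password manager does the same job with less effort), a strong, non-recycled password can simply be generated at random for each site:

```python
# Minimal sketch: generate a random, per-site password so nothing is recycled.
import secrets
import string

def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run; store it in a manager
```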

“Ultimately if a social media site is hacked badly enough then your profile and all of its information is owned by someone else. The risk is rather small, but it is there, so think carefully about what information you put online anywhere,” Abrams warned.

Anonymous hacks security company, say reports

Anonymous, a group of online activists, has attacked a security company that was investigating the collective.

The website of HBGary Federal was defaced with a message from Anonymous after the group discovered that HBGary Federal was planning to divulge the names of alleged members of Anonymous to the FBI. In addition, Anonymous downloaded over 60,000 emails from HBGary Federal and posted them on The Pirate Bay file-sharing website, according to security company Sophos.

“You think you’ve gathered full names and addresses of the ‘higher-ups’ of Anonymous? You haven’t,” the group posted on the HBGary Federal website. “You think Anonymous has a founder and various co-founders? False.”

Read more of “Anonymous hacks security company, say reports” at ZDNet UK.

Cloud a haven for cybercriminals

The affordability and increasing popularity of cloud services are providing a new avenue for cybercriminals, say industry observers who note that service providers play a role in curbing such illegal activities. However, they warn that doing so will not be an easy task.

A security researcher last month warned that cloud services can be exploited for criminal purposes. At the Black Hat security conference, Thomas Roth said he was planning to release an open source kit which will enable users to crack Wi-Fi passwords by leveraging the computing power of the Amazon Web Services (AWS) cloud running on GPU-based servers.

There are other similar tools that use leasable cloud services to crack Wi-Fi security authentication mechanisms, such as Wi-Fi Protected Access (WPA), using the cloud infrastructure’s processor cluster to run dictionary attacks.

According to security players, such tools are not uncommon and are easy to obtain.

In an e-mail interview, Ronnie Ng, manager of systems engineering at Symantec Singapore, pointed to a 2009 blog post which noted that a Web site was purportedly selling automated Wi-Fi Protected Access (WPA) password crackers that used cloud computing technology.

The site allowed anyone to “pay a token sum of US$34 to rent time on a large 400-node computer cluster and check over 135,000,000 potential passwords against a targeted victim in just 20 minutes”. The Symantec blogger noted that even without technical knowledge, a malicious attacker would be able to obtain and use the password for illegal means such as to spy on the victim’s network.

Magnus Kalkuhl, director of Kaspersky Lab’s global research and analysis team in Europe, also noted that cloud infrastructure has been misused for hosting malware. He told ZDNet Asia in an e-mail that there have been instances in the past where Amazon Elastic Compute Cloud (Amazon EC2) was used as a malware hosting platform, including a recent instance in which a trojan was spread using Rapidshare.

Kalkuhl noted that, in fact, certain malware “for years” have already been running on their own cloud. “Actually all DDoS (distributed denial-of-service) attacks and spamming services offered by cybercriminals are based on a cloud architecture, [which is] their own botnets made of thousands or even millions of infected PCs.”

In an e-mail interview, Paul Ducklin, head of technology for Sophos Asia-Pacific, added: “Almost anything you can do in the way of cybercrime on a standalone PC can be achieved through the cloud.”

In fact, he noted that cloud-based services such as social networks can make cybercrime easier.

Spam and scams can spread on Facebook, for instance, without ever raising an alarm on the user’s PC, Ducklin explained, noting that the benefit of distributing content automatically from many users to many users over social networks can work to the advantage of cybercriminals.

Responsibility on service providers
With more users moving onto the cloud platform, Ng cautioned that criminal activities on the cloud will rise.

“The cloud’s growing popularity will increase the risk of [users] being targeted by cybercriminals,” he said. He noted that the onus is on cloud service providers to “demonstrate due diligence” in ensuring organizations that lease their services do not engage in malicious activities.

Ducklin concurred: “Why would [businesses] be willing to store [their] data with a cloud provider that also allows cybercrooks and dodgy operators to use its services?”

Citing the case of DDoS attacks related to Wikileaks, he stressed that other users can be affected if a service provider is indiscriminate about whom it provides its services to.

“If your cloud provider services a wide range of businesses, the chance that one of them might become the victim of vigilantes carrying out a DDoS attack is higher,” Ducklin said. “You might lose quality of service due to sociopolitical problems suffered by someone else ‘in your cloud’.”

But while the security players agreed that cloud service providers should be vigilant when providing services, they noted that ensuring total control is not easily achieved.

Kalkuhl said concerns over privacy limit service providers’ ability to have complete control.

“Major cloud service providers like Amazon may check outgoing traffic for suspicious patterns such as DDoS attacks against other machines, [as well as instruct] customers who use virtual machines to conduct system penetration tests to inform the service provider in advance.

“However, it is not possible for the providers to scan the content of [network] traffic for keywords or malware signatures, for instance,” he explained. “Neither are they allowed to scan or manually check what files are stored in a provided [cloud] environment. Otherwise, people would lose their trust in cloud providers and the whole business model would be put at risk.”

Microsoft to seal 22 security holes this month

Microsoft has said it will address 22 vulnerabilities as part of this week’s Patch Tuesday, with three of the accompanying bulletins rated critical.

Three of the 12 bulletin items released by Microsoft earlier today are classified as critical, and affect Microsoft’s Windows operating system, with one affecting Microsoft’s Internet Explorer browser as well. The rest are classified as “important”.

In a post on Microsoft’s Security Response Center blog, the company said it will be making fixes for vulnerabilities in the Windows Graphics Rendering Engine, as well as a CSS exploit in Internet Explorer that could allow an attacker to gain remote code execution.

Along with the fixes for the rendering engine and the CSS exploit, Microsoft says it will be addressing zero-day flaws in the FTP service found in Internet Information Services (IIS) 7.0 and 7.5.

Not included in this month’s batch of announced patches is a fix for the recently-discovered script injection attacks that affect Internet Explorer. Acknowledged by the company last month in Security Advisory 2501696, the exploit targeted the way IE handled MHTML on certain types of Web pages and document objects, and could provide hackers with access to user information. According to Wolfgang Kandek, chief technology officer at Qualys, the best route to prevent those attacks continues to be the workaround Microsoft outlined in its initial security advisory about the problem.

Microsoft has published a full list of the pending issues.

Report: Hackers penetrated Nasdaq computers

Federal authorities are investigating repeated intrusions into the computer network that runs the Nasdaq stock exchange, according to a Wall Street Journal report that cited people familiar with the matter.

The intrusions did not compromise the tech-heavy exchange’s trading platform, which executes investors’ trades, but it was unknown which other sections of the network were accessed, according to the report.

“So far, [the perpetrators] appear to have just been looking around,” one person involved in the Nasdaq matter told the Journal.

The Secret Service reportedly initiated an investigation involving New York-based Nasdaq OMX Group last year, and the Federal Bureau of Investigation has launched a probe as well. Investigators are considering a range of motives for the breach, including national security threat, personal financial gain and theft of trade secrets, the newspaper reported.

Nasdaq representatives could not be reached for comment.

Investigators have not been able to follow the intruders’ path to any specific individual or country, but people familiar with the matter say some evidence points to Russia, according to the report. However, they caution that hackers may just be using Russia as a conduit for their activities.

The Nasdaq, which is thought to be as critical from a security standpoint as the national power grid or air traffic control operations, has been targeted by hackers before. In 1999, a group called “United Loan Gunmen” defaced Nasdaq’s public Web site with a story headlined “United Loan Gunmen take control of Nasdaq stock market.” The vandalism was quickly erased, and Nasdaq officials said at the time that the exchange’s internal network was unaffected.

Aust pubs tap biometrics to curb violence

Pubs and clubs in Australia are signing up in droves to national and state biometrics databases that capture patron fingerprints, photos and scanned driver licenses in efforts to curb violence.

The databases of captured patron information mean that individuals banned at one location could be refused entry across a string of venues. Particularly violent individuals could be banned for years.

The databases are virtually free from government regulation, as biometrics are not covered by privacy laws, meaning that the handling of details is left to the discretion of technology vendors.

Venues typically impose bans of one month to a year, and it is up to the discretion of clubs to adopt or share exclusion lists.

Australia’s largest database, idEye, which pitches itself as the only national repository, has seen an explosion of venues signing up to share lists.

“The takeup is growing very rapidly,” said Peter Perrett, chief executive of ID-Tect, the company which created idEye. “It has exploded.”

“You don’t get on the list because you didn’t want to go home–you get on there because you are a safety risk.

“Bans are only effective from one venue, but you will also be flagged…it will pop up and show that this guy is banned, here are three photographs, his details and the offence.”

Venues may choose to accept or ban any individual on the list, and data is encrypted and stored on “secure servers”.

State governments have been cracking down on violence in pubs and clubs, threatening to impose tough measures on the worst offenders, including night-time curfews.

The national database can be tweaked to suit a venue, allowing them to source different patron identifiers such as facial recognition, optical character recognition or fingerprint scans.

Perrett would not be drawn further on the database’s adoption, citing commercial sensitivity, but said it is “a lot larger in [use and adoption] than you’d think”.

While patrons remain divided on the need to surrender biometric data to buy a beer, the system appears to have led to a halt in violence in pubs and clubs.

The Woodport Inn on the NSW Central Coast has eliminated the violent incidents that had once troubled its night club.

“[The] violent people here are gone, just gone,” said one bar manager. “They are scared of it. They know they will be caught.”

The venue is one of several in the area that use NightKey fingerprint scanners, including the Central Coast Hotel and Woy Woy Leagues Club, but it does not share ban lists.

A manager from a Sydney CBD bar, who requested anonymity, said that the ban database had cut violence, adding that the venue may soon be able to reduce its security headcount. The machines are not classified by NSW Police as security equipment and can be operated by a staff member.

Alcohol-related incidents have dropped by up to 80 percent in some venues that use the scanners, according to Perrett. He said the data is a smoking gun that police can use to convict violent offenders.

He said that investigations into “very, very serious crime in major places”, with offenders currently up before the courts, have taken “minutes” rather than weeks because biometric data can be linked to CCTV footage.

Used alone, Perrett said, CCTV is inefficient and offenders “are not worried about it”. He added that crime in venues often goes unreported because of the negative publicity it generates.

The patron data collected in the database is destroyed within 28 days unless an offence is committed beforehand. The data is not automatically fed into police records.

However, many might be concerned about the privacy implications of the collection of such data.

Biometrics Institute head Isabelle Moeller said that pubs and clubs are still refusing to sign onto its biometric charter of use, which has the backing of the Federal Privacy Commissioner.

“[Venues] may roll biometrics out innocently or they may not want to bother with privacy concerns,” Moeller said. “Biometrics needs to be part of privacy law, the government needs to take control of this.”

She said that Clubs NSW has agreed to sign onto the charter and will participate in upcoming biometric privacy discussions, but the reception from other states has been cold.

The Australian Hotels Association (AHA) (NSW) chief executive Sally Fielke said in a statement that the implementation of biometric scanners is a decision for individual clubs. “The introduction of ID scanning is a business decision for individual venues.”

“The AHA (NSW) encourages members to look at a whole range of proactive initiatives to continue to ensure that their venues remain safe…and assists venues to comply with all legal obligations including privacy laws.”

Fielke said that the take-up of the services by AHA (NSW)’s members was low.

The association did not respond to questions about whether it would recommend venues use biometric scanning.

This article was first posted on ZDNet Australia.

Microsoft warns of Windows zero-day flaw

Microsoft has warned of a zero-day vulnerability in Windows that could let an attacker collect any information stored in an Internet Explorer user’s browser.

The flaw allows a hacker to inject a malicious client-side script in an otherwise legitimate Web-request response made by the Internet Explorer (IE) browser, Microsoft said in a security advisory on Monday. The script could post content or perform actions online that would appear to have been initiated by the victim.

Alternatively, the vulnerability, which lies in the MHTML Web protocol, could allow the script to collect an IE user’s information, or spoof content displayed in the browser to “interfere with the user’s experience”, Microsoft security advisor Angela Gunn said in a blog post.

Read more of “Microsoft warns of Windows zero-day flaw” at ZDNet UK.

Anonymous: UK arrests are a ‘declaration of war’

Anonymous has issued a warning to the U.K. government after five young men suspected of being connected to the group were arrested on Thursday.

The group, which has claimed responsibility for a series of distributed denial-of-service (DDoS) attacks launched in support of whistle-blowing site Wikileaks, said it viewed the arrests as “a declaration of war” by the British authorities.

“Anonymous believes… that pursuing this direction is a sad mistake on your behalf. Not only does it reveal the fact that you do not seem to understand the present-day political and technological reality, we also take this as a serious declaration of war from yourself, the U.K. government, to us, Anonymous, the people,” the group said in a statement (PDF) on Thursday.

Read more of “Anonymous: UK arrests are a ‘declaration of war’” at ZDNet UK.

A new (old) way to protect privacy: Disclose less

A new pilot project from Microsoft and IBM offers a high-tech twist on a bit of common sense: divulging less information about yourself protects your privacy.

Their joint effort is built on the observation that, in many cases, there’s no need for someone verifying your credentials to know everything about you. A bouncer at a nightclub needs to know that you’re 21, not your name or home address. A county database may only require proof that you’re a local resident, not your phone number or e-mail address.

Microsoft and IBM’s solution is called Attribute-Based Credentials, or ABC, and their pilot project is scheduled to be announced tomorrow to coincide with what’s being called Data Privacy Day. The project is supposed to last four years and result in both a credential architecture and a reference implementation, complete with source code, that will be made publicly available.

“Our goal is to provide the technical tools but also the societal discussions about how we can achieve privacy in an electronic society,” Jan Camenisch, a Zurich-based cryptographer with IBM Research, told ZDNet Asia’s sister site CNET.

The first application is scheduled to appear at Norrtullskolan, a secondary school in Söderhamn, Sweden, and will allow students and parents to communicate with school officials and access a social network–while protecting their privacy at the same time. Another pilot will be implemented for grading the faculty at the Research Academic Computer Technology Institute in Patras, Greece.

Both pilot projects rely on a system called ABC4Trust, which is designed to allow students or parents to “prove” certain aspects of their identity without revealing others. A student can cryptographically prove that she is a member of a sports team, or that she has attended a certain class.

“The problem with today’s solutions is that they don’t make these kind of distinctions,” Ronny Bjones, a Microsoft security technology architect, said. “We leave such a digital footprint around on all these different sites.”

One likely application for the ABC system: electronic identity cards issued by national governments. Microsoft has already demonstrated a system that can verify that someone is at least 18 years old and resides in Berlin, without disclosing an actual birthdate.

The idea of using encryption technology to enable people to disclose less about themselves isn’t exactly new. The legendary cryptographer David Chaum, the father of digital cash who’s now building secure electronic voting systems, developed some of these ideas in the late 1980s.

A decade later, University of Pennsylvania computer scientist Matt Blaze and other researchers published a paper (PDF) on what they called “decentralized trust management”. But it was Dutch cryptographer Stefan Brands who developed the concept of limited-disclosure digital certificates to its fullest.

Microsoft bought Brands’ company, Credentica, in 2008, and released the U-Prove specification last year along with a promise not to file patent lawsuits over its use.

ABC will use both U-Prove and IBM’s related technology called Identity Mixer. “It’s extremely important that we can help people that build solutions (that) build privacy by design,” Bjones said.

This article was first published as a blog post on CNET News.

UK police nab 5 Anonymous DDoS suspects

U.K. police have arrested five young men on suspicion of taking part in distributed denial-of-service attacks launched by Anonymous, the group that has targeted corporate sites for attack in defence of Wikileaks.

The five, who are aged between 15 and 26, were detained at 7am on Thursday at addresses in the West Midlands, Northamptonshire, Hertfordshire, Surrey and London, the Metropolitan Police Central eCrime Unit (PCeU) said in a statement. The suspects were taken to local police stations and remain in custody, the police added.

The Anonymous group of activists undertook a number of distributed denial-of-service (DDoS) attacks last year, using a tool called the Low Orbit Ion Cannon (LOIC) to try to overwhelm servers. The group successfully took down websites belonging to companies including Visa, MasterCard and PayPal, in protest at their suspension of donation-payment processing for the Wikileaks whistle-blowing operation.

Read more of “Anonymous DDoS swoop results in five arrests” at ZDNet UK.

Facebook lets users turn on crypto

Facebook announced Wednesday it is now offering users the ability to use encryption to protect their accounts from being compromised when they are interacting with the site, something security experts have been seeking for a while.

The site currently uses HTTPS (Hypertext Transfer Protocol Secure) when users log in with their passwords, but now everything a user does on the site will be encrypted if he turns the feature on, the company said in a blog post.

Enabling full-session HTTPS eliminates the ability for attackers to use tools like the Firefox plug-in called Firesheep to snoop on communications between a person’s computer and the site’s server.
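
For readers running their own sites, a minimal sketch of the two server-side measures that defeat Firesheep-style session sniffing is shown below (assuming the Flask framework; this is not Facebook's code): redirect plain HTTP to HTTPS, and mark the session cookie Secure so it never travels in the clear.

```python
# Minimal sketch (Flask assumed): force HTTPS and keep the session cookie off
# plain HTTP, which is what Firesheep relies on to hijack logged-in sessions.
from flask import Flask, redirect, request

app = Flask(__name__)
app.config.update(
    SESSION_COOKIE_SECURE=True,    # session cookie is only sent over TLS
    SESSION_COOKIE_HTTPONLY=True,  # and is not readable by page scripts
)

@app.before_request
def force_https():
    # Any request arriving over plain HTTP is bounced to the HTTPS URL.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.route("/")
def home():
    return "Served over HTTPS only."
```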

“Starting today we’ll provide you with the ability to experience Facebook entirely over HTTPS. You should consider enabling this option if you frequently use Facebook from public Internet access points found at coffee shops, airports, libraries, or schools,” the post says. “The option will exist as part of our advanced security features, which you can find in the Account Security section of the Account Settings page.”

Using HTTPS may mean that some pages will take a little bit longer to load, and some third-party applications aren’t currently supported, the company said. The option is rolling out over the next few weeks. “We hope to offer HTTPS as a default whenever you are using Facebook sometime in the future,” the post says.

“Every user’s Facebook page is unique and it’s been complex pulling together all the different parts,” said Facebook Chief Security Officer Joe Sullivan when asked what the time frame is to making HTTPS the default setting. “It’s an interesting technical challenge for the company.”

While banking and e-commerce sites use encryption, social media and other sites have been somewhat slow to move in that direction–the exception being Google. Google has long offered Gmail users the ability to use HTTPS and made it the default a year ago. The company also offers encryption for use with Google Docs and Web search.

Facebook blames bug for Zuckerberg page hack

A bug allowed an unidentified person to post a message on Facebook CEO Mark Zuckerberg’s fan page on the site yesterday, a spokesman told ZDNet Asia’s sister site CNET on Wednesday.

The odd message that garnered more than 1,800 “likes” and more than 400 comments before it was taken down was: “Let the hacking begin: If facebook needs money, instead of going to the banks, why doesn’t Facebook let its users invest in Facebook in a social way? Why not transform Facebook into a ‘social business’ the way Nobel Prize winner Muhammad Yunus described it? http://bit.ly/fs6rT3 What do you think? #hackercup2011”

A Facebook spokesman provided this e-mail statement today: “A bug enabled status postings by unauthorized people on a handful of public pages. The bug has been fixed.”

Whoever is responsible only had the ability to post on the page and did not have access to private data on the Facebook account, Joe Sullivan, chief security officer at Facebook, said in a follow-up interview with CNET. “It was a very limited bug in that it only applied to the ability to post,” he said.

Specifically, the bug was in an API (application programming interface) that allows publishing functionality on the site, said Ryan McGeehan, security manager for incident response at Facebook.

Only a handful of high-profile accounts were affected, they said, declining to offer exactly whose pages were targeted. They also declined to comment on whether the hack earlier this week of French President Nicolas Sarkozy’s Facebook page was related. Someone had posted a message on the official’s page saying he would be stepping down next year.

Asked if they knew who was responsible for the breaches, Sullivan said he could not comment further because it is an active investigation.

“It’s astonishing the level of speculation without accurate information” in published reports, he said. “There was the (false) assumption that there was unauthorized access to information…Our commitment is to try and prevent that and respond incredibly quickly when something happens.”

“Facebook users–famous or not–need to take better care of their social-networking security,” said Graham Cluley, senior technology consultant at Sophos, in a statement. “Mark Zuckerberg might be wanting to take a close look at his privacy and security settings after this embarrassing breach. It’s not clear if he was careless with his password, was phished, or sat down in a Starbucks and got sidejacked while using an unencrypted wireless network, but however it happened, it’s left egg on his face just when Facebook wants to reassure users that it takes security and privacy seriously.”

Sophos elaborated more about the incident in its security blog.

The odd message posted to Zuckerberg’s fan page relates to Facebook’s announcement last week that it had raised US$1.5 billion at a US$50 billion valuation; US$1 billion of it comes from investment bank Goldman Sachs, which opened up the round to participation from wealthy overseas clients.

Also today, Facebook announced that it is now offering users the ability to secure their connection with the site using HTTPS (Hypertext Transfer Protocol Secure). It is rolling the option out to users and hopes to offer it as a default in the future. Enabling full-session HTTPS will eliminate the ability for attackers to compromise Facebook accounts by using tools like the Firefox plug-in called Firesheep.

CNET’s Caroline McCarthy contributed to this report.

RSA muscles up on core capabilities

newsmaker RSA COO Tom Heiser doesn’t consider himself a visionary because he “cannot predict where things are going to be in five years”. But the company veteran is certain about one thing: security will be an increasingly critical component as cloud and mobile adoption continue to grow.

Heiser joined EMC, which acquired RSA in 2006, as a sales trainee in 1984 after graduating from the University of Massachusetts. The executive progressed through 12 positions within the company before landing at the EMC security arm in July 2008.

With over 26 years of experience under his belt, the COO considers formulating and executing strategies his strongest suit–skills that are critical in building up RSA’s core strengths in authentication and security management, which he described as “hot growth areas”–thanks to the rise of cloud and mobile computing.

Recently in Singapore to meet sales partners, Heiser sat down with ZDNet Asia to discuss RSA’s business plans and chat about new year resolutions and the risks of migrating to cloud computing.

It’s been three years since the economic downturn in 2008 and things are finally looking bullish for the global economy. Is one of RSA’s new year resolutions to capitalize on this upswing and enter new markets?
There’s this book called Profit From The Core, which we use as a template, and it talks about how closely we should stay true to our core businesses.

Using this as part of our strategic planning process, we determined that RSA has three cores to our business. One core is authentication, the second is security management, while our third “emerging” core is around virtualization and cloud computing.

Are we branching out of these? Probably not. I mean, we take a look at the whole landscape of security, and we see what’s hot, where’s the growth. Security management is super hot, virtualization and cloud computing is crazy hot, so we’re already in these hot, high-growth areas.

What we don’t want to do is delude ourselves. You won’t see us getting into network-based security or endpoint-based security, firewall or antivirus. Those are big but, like antivirus, super slow growth and ripe for disruption. You can take a look at the numbers–antivirus is estimated to be effective 35 percent of the time. So, we’re assuming the firewall will be breached and antivirus won’t work.

Where do you see RSA’s focus heading in 2011?
What RSA has done is we have assembled a portfolio of products, solutions and services into a suite that addresses customers’ challenges. IT spend is supposed to grow 4 to 6 percent this year, and the security market is supposed to grow 9 percent. If you look at these figures, security is twice what the IT spend is. This demonstrates that we’re in areas of high growth.

One of these areas is in security management. We’re putting RSA’s enVision, security information and security management, data loss prevention (DLP) and Archer Technologies’ GRC (governance, risk and compliance) products into a suite, which is where customers are spending their dollars.

The other trend is the explosion of virtualization and cloud computing, and their associated risks. We have tons of data on that, and one statistic that jumped out at me was that 91 percent of CIOs are concerned about security with cloud deployments. Another survey showed that 51 percent of CIOs said security was their No. 1 concern. So, we’re attacking this concern and our portfolio is uniquely positioned to capitalize on that.

That would mean that some companies still can’t quite manage the security risks involved when moving to the cloud?
Absolutely. It’s something I see all the time.

About two months ago, for instance, we were talking to one of the top five global healthcare companies which recently completed a huge private cloud deployment. The company was very progressive and driving cloud for cost savings and operational efficiencies. So it was virtualizing its IT infrastructure and was going crazy with that.

But when we met the CIO and his team, he was, like, ‘I need a strategy to keep up with this thing’. He wasn’t involved in the upfront deployment, so now what he’s doing is playing catch-up with how to protect that environment. This happens all the time.

I wouldn’t call the CIO’s reaction panic, but you could see huge concern on his part where it was reactive rather than proactively building security into the company’s cloud deployment.

You identified authentication as one of RSA’s core areas. Could you give us a glimpse of authentication innovations that are on the cards?
If we go back seven years ago, over 80 percent of RSA’s business was SecurID. In 2011, this will be the first year that SecurID constitutes less than half of our business. It’s not that the business is declining, but that all the other areas are seeing high growth.

If we fast forward, we still have the largest market in authentication but what we’re doing is deploying it in a cloud environment, which is the next big thing.

Mobile authentication is also a big growth area for us. There are over 300 million identities we’re protecting through our software-as-a-service (SaaS) application products. There’ll also be other things through mobile and non-token-based authentication, which are coming up real soon.

Mobile security presents a huge opportunity for us. How do we protect smartphones and make sure these are secured? The other challenge is how we can turn this device into an authenticator.

So these are great opportunities on both fronts: to secure the device, and using the device to secure.

Rivals such as Dell Computer, which acquired storage vendor Compellent last month, and Hewlett-Packard have been pretty active on the acquisition front. Are you planning to join in on the M&A (mergers and acquisitions) fray?
We will be acquisitive, mark my word on that.

Acquisitions aside, though, we’re driving a lot of internal innovations as well. So, we’ll stay true to our core, but we’re going to complement it both organically with our own development as well as through M&A activities.

You’ve been with EMC since 1984, fresh out of the University of Massachusetts. Ever thought of doing something else, like investing in your own startup?
You know it’s an interesting question because I once thought of becoming a venture capitalist (VC). But, I’m not a visionary, I can tell you that now. I think I’m very good with execution, and I can develop a strategy but I can’t predict where things are going to be in five years.

I probably picked only one stock to invest in in the past five years–General Electric at US$8 a share–because I knew it wasn’t going to go under. That’s why I never became a VC!

Today, I put everything into my work and family but leave the rest, such as investing, to the professionals.

Did you plan to stay with the same company for so long?
I didn’t plan for it. I would have bet anything that I wouldn’t have been with the same company for 26, almost 27 years. Never in a million ways would I have planned it the way my career has panned out.

In fact, I was 22 years old when I first started out and I wanted to work for IBM, but that offer didn’t come in until after I started with EMC. By then, Roger Marino, one of the founders of EMC, wouldn’t let me quit. I still see him socially and I thank him for keeping me here every time.

I don’t know if you consider it a role or a job but, to me, I had about 12 different jobs in my almost-27 years at EMC. That has allowed me to stay fresh and learn. It’s like every time I’m wrapping up a role, they would say, ‘Hey, do you want to run M&A?’ and I’d think, ‘I’d love to run M&A!’ So I go run M&A. Or ‘Hey, RSA’s got some changes going on’ and I’d say ‘I love RSA! They’ve got so much potential’, and there I go. It’s just been unbelievable for me.

In one sense, being at EMC is all I know, and yet, it’s also kind of embarrassing. But who knows what’s next? One of my tenets is to do the best job possible and your career and compensation will follow. It’s a little bit idealistic, but I haven’t seen anybody following this motto not get rewarded by it.

Retailer’s Web site hack exposes credit card details

Cosmetics company Lush has warned customers that its U.K. Web site has been hacked repeatedly over the past three months, exposing credit-card details to fraudulent use.

Lush did not release technical details of the attack, nor specify the number of customers compromised or the security techniques used to handle the data involved, but anecdotal evidence indicates that some customers have been the victims of fraud.

The company sent an email statement to customers last Thursday outlining the incident and urging them to contact their banks.

Read more of “Attacks on Lush website expose credit-card details” at ZDNet UK.

Hackers target carbon emissions trading market

In a digital heist reminiscent of a John le Carré novel, more than US$9 million worth of greenhouse-gas emissions permits were stolen from the Czech Republic electricity and carbon trading registry last week and transferred to accounts in other countries, at the same time as the Prague-based registry office was evacuated due to a bomb threat.

That electronic theft, the latest in a series of security breaches affecting the market for carbon emissions, led the European Commission to suspend transactions in national European Union registries last Wednesday for a week.

“Three attacks have taken place since the beginning of the year and other registries are known to be vulnerable to similar attacks,” the European Commission said in a statement last Friday. “The Commission’s best estimate is that roughly 2 million allowances, representing a total of less than 0.02 percent of allowances in circulation, have been illegally transferred out of certain accounts.” The much-larger carbon futures market was not affected, the agency said.

Valued at 14.48 euros each, those 2 million allowances would be worth about US$39.4 million based on last Friday’s trading.

Carbon emissions allowances, or permits, are not your typical computer hacker target. Similar to other commodities that are traded on spot and futures markets, European Union Allowances permit energy companies and industrial factories to trade their pollution permits by buying and selling allowances allocated by their government. For instance, a Romanian energy company that expects to emit less carbon dioxide for a particular year can sell its extra government-issued emissions allowances to a utility in Germany that expects to emit more carbon dioxide than its government permits.

Ostensibly, the trading system should be highly secure and trades carefully accounted for to prevent fraud and theft. But lax security at some of the registries and the fact that transactions can be completed quickly on the spot market are likely what is appealing to thieves, sources told ZDNet Asia’s sister site CNET.

“It seems it is relatively easy to access the registries in this country and other countries,” said Nikos Tornikidis, carbon portfolio manager at Blackstone Global Ventures, from whose account 475,000 allowances were stolen.

“Once you get your hands on the allowances, it is quite easy to sell them and the settlement is almost instantaneous,” he told CNET in an interview. “In a matter of hours you can get money out of the system. This doesn’t happen when you trade other things.”

The bomb threat coinciding with the theft of the allowances is just “too coincidental”, said a trader close to the matter who asked to remain anonymous. “The registries have lax security,” he said. “They don’t have mechanisms to filter the accounts” by serial number to prevent theft.

Some people suspect that an insider was involved, the trader said, adding that he believes it was computer hacking instead.

The market was operating normally until around 12:30 p.m. Tuesday when Prague police received a tip of a bomb threat and the offices of the Czech registry, OTE, which stands for Electricity Market Operator, had to be evacuated, according to Reuters.

Early the next morning, employees at Blackstone Global Ventures went to check their carbon permissions account and noticed that it had been nearly emptied out. In addition, the contact information on the account had been changed, something that should only be accomplished by someone with administrator privileges at the registry, said Tornikidis.

Blackstone reported the matter immediately to the Czech Republic registry and was able to find out the unique serial numbers for the missing allowances, he said. “I hope that we managed to stop the trading at a point where our allowances are with the first buyers after the hacker sold them,” he added.

The Czech Republic registry said a total of 1.3 million permits were missing from six accounts and that the digital assets were transferred to accounts in Poland, Italy, Estonia, Liechtenstein, and Germany, and possibly other countries, according to Reuters.

As custodian of the carbon emissions permits, the OTE has a fiduciary obligation to account holders and should replace any that are missing, Tornikidis said.

“I don’t know how it is possible in today’s IT world that someone is able to hack into an account where someone’s assets are and transfer them out,” he said. “Why can’t they follow the money trail?”

Jiri Stastny, chief executive officer at the OTE in Prague, could not be reached for comment and other employees at the government-run registry directed all calls to him.

The Czech Republic is not the only country to have security problems crop up in the relatively new carbon emissions trading market. The Austrian registry reported theft of allowances due to hackers two weeks ago and 1.6 million allowances belonging to cement maker Holcim in Romania were reported stolen from that country’s registry in November. A year ago, 250,000 allowances were stolen in Germany after companies there were targeted by phishing attacks, according to reports.

The European Commission is likely to require additional security procedures at the national registries, such as passwords being sent to mobile phones or other two-factor authentication methods, according to a Bloomberg report.

This article was first published as a blog post on CNET News.

Malware toolkits guarded with stolen DRM

Malware writers are pinching the anti-piracy technology embedded in some of the world’s most popular software to protect their own work, according to Symantec.

The antivirus company said writers of complex malware toolkits can embed measures to prevent users from stealing their work.

This means the writers are able to rent the toolkits to non-technical users who then embed the malware into websites in hopes of duping victims out of information such as bank account details.

Writers may also take a commission in an “affiliate system” from the value of victim information stolen using the kits.

Anti-piracy measures used in the most popular software, including Symantec products, have been reverse-engineered and distributed over the internet.

“They are using the same Digital Rights Management (DRM) technology used by major software,” Symantec head Craig Scroggie said. “They are locking down their software for a minimal amount of use or they are changing the IP reply domain so they have to be involved in the sale.”

“They will build their own DRM, steal it from the big names or cobble it together.”

Most would-be buyers of the toolkits lack the technical understanding to reverse-engineer the DRM measures.

The price of a malware toolkit has risen substantially, Scroggie said, from about US$15 in 2006 to more than US$8000.

“The premium is because of the success rate,” Scroggie said.

This article was first published at ZDNet Australia.

S’pore government preps 2FA facility

SINGAPORE–The local government has set up a wholly-owned subsidiary to operate the country’s IT security facility focusing on two-factor authentication (2FA), which is part of an initiative first announced in 2005.

Called Assurity Trusted Solutions, the subsidiary will oversee operations of the national authentication framework (NAF), a nationwide security layer to authenticate online transactions between the government, businesses and citizens.

Officials from the Infocomm Development Authority of Singapore (IDA) said at a media briefing here Thursday that Assurity is scheduled to roll out its services in the second half of this year, offering 2FA services to service providers and consumers. ST Electronics has been contracted to design, build, operate and maintain the NAF infrastructure in a deal spanning five years. When asked, IDA officials declined to reveal how much the contract was worth.

More details to follow…

Report finds smart-grid security lacking

Echoing concerns of security experts, a new report from the Government Accountability Office warns that smart-grid systems are being deployed without built-in security features.

Certain smart meters have not been designed with a strong security architecture and lack important security features like event logging and forensics capabilities used to detect and analyze cyberattacks, while smart-grid home area networks that manage electricity usage of appliances also lack adequate built-in security, according to the report released last week by the GAO, the auditing and investigative arm of the U.S. Congress.

“Without securely designed smart-grid systems, utilities will be at risk of not having the capacity to detect and analyze attacks, which increases the risk that attacks will succeed and utilities will be unable to prevent them from recurring,” said the report.

The report also took aim at the self-regulatory nature of the industry, saying utilities are focusing on complying with minimum regulatory requirements rather than having adequate security to prevent cyberattacks.

The National Institute of Standards and Technology “does not have a definitive plan and schedule, including specific milestones, for updating and maintaining its cybersecurity guidelines to address key missing elements”, the report concluded. One of the important elements NIST has failed to address is the risk of attacks that use both cyber and physical means, the report said.

“Furthermore, Federal Energy Regulatory Commission has not established an approach coordinated with other regulators to monitor the extent to which industry is following the smart-grid standards it adopts,” the report said. “The voluntary standards and guidelines developed through the NIST and FERC processes offer promise. However, a voluntary approach poses some risks when applied to smart-grid investments, particularly given the fragmented nature of regulatory authority over the electricity industry.”

In comments on the report that were included as an appendix, the Department of Commerce–which oversees NIST–says NIST “agrees that the risk of combined cyber-physical attacks on the smart grid is an area that needs to be more fully explored in the future.”

Meanwhile, FERC Chairman Jon Wellinghoff said in comments included in an appendix to the report that he will ask his staff to evaluate ways to improve coordination among regulators and assess whether challenges identified in the report should be addressed in FERC’s cybersecurity efforts, but will need to work within the commission’s statutory authority.

The goal of the smart grid is to improve reliability and efficiency by incorporating information technology systems into power lines and customer meters for monitoring power distribution and usage without having to send operators into the field.

(Via Threatpost)

This article was first published as a blog post on CNET News.

Australian university exposes student info

The University of Sydney has exposed thousands of student details including names, addresses and course information to public access via the Internet.

The details were stored in a way that allowed them to be accessed by altering identification numbers revealed in a university Web address.

A spokesperson for the University of Sydney vice-chancellor, Andrew Potter, said the details have been pulled offline and the university is investigating the matter.

“We confirmed that method of access was possible and immediately we shut it down,” Potter said. “We do not know as yet if details were compromised.”

Potter did not rule out contacting students to warn them of the breach, but was unsure if an IT forensic investigation was underway.

A review of logs could reveal whether the details were accessed, but industry track records suggest many organizations do not keep the logging needed to find out.

“It depends on having the right logging, which is seldom the case,” HackLabs director Chris Gatford said.

Such vulnerabilities, where data can be accessed by entering sequential numbers into a URL address, are common and are often introduced by software developers.

But common mitigation efforts also fail.

“Developers move the identity from the URL to part of a POST request, but it still doesn’t mitigate the vulnerability,” Gatford said. “You can use a local proxy then to identify that value and do the attack in the POST of the request”.
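
The underlying flaw is what security testers call an insecure direct object reference: the server trusts an identifier supplied by the client. A minimal sketch of the fix (a hypothetical Flask handler, not the university's actual code) is to look the record up and confirm that the logged-in user is entitled to it, no matter where the identifier arrives from:

```python
# Minimal sketch: never authorize on a client-supplied ID alone (URL or POST);
# confirm the record belongs to the logged-in user before returning it.
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "replace-me"  # needed for the session cookie

# Hypothetical stand-in for the student-records database.
RECORDS = {1001: {"owner_id": 7, "name": "A. Student", "course": "BSc"}}

@app.route("/records/<int:student_id>")
def student_record(student_id):
    record = RECORDS.get(student_id)
    # The authorization check: the identifier by itself grants nothing.
    if record is None or record["owner_id"] != session.get("user_id"):
        abort(404)  # respond identically whether or not the record exists
    return jsonify(record)
```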

The vulnerability was pointed out to the university by the Sydney Morning Herald, which also reported earlier this week that the university’s Web site and corporate Web pages had been hacked and defaced.

This article was first published at ZDNet Australia.

Two charged in AT&T-iPad data breach

Two men were charged with computer crimes today for allegedly hacking into AT&T servers and stealing e-mail addresses and other information of about 120,000 iPad users last summer.

Andrew Auernheimer, 25, was arrested in his home town of Fayetteville, Ark., while appearing in state court on unrelated drug charges, and Daniel Spitler, 26, of San Francisco, surrendered to FBI agents in Newark, N.J., according to the U.S. Attorney’s office in New Jersey. Both men were expected to appear before federal judges in Arkansas and New Jersey.

They each face one count of conspiracy to access a computer without authorization and one count of fraud in connection with personal information. They’re also looking at a maximum of 10 years in prison and a US$500,000 fine.

Auernheimer was ordered held until a bail hearing set for Friday, while Spitler was released on US$50,000 bail and ordered not to use the Internet except at his job as a security guard at a Borders bookstore, according to an Associated Press report. In comments to reporters outside the Newark courthouse, Spitler said he was innocent: “The information in the complaint is false. This case has been blown way out of proportion.”

Auernheimer told the magistrate that he had been drinking until 6:30 that morning and said of the complaint: “This is a great affidavit–fantastic reading,” according to the AP report.

Last June, Auernheimer told ZDNet Asia’s sister site CNET that members of his hacker group, which calls itself Goatse Security, uncovered a hole in AT&T’s Web site used by iPad customers on the 3G wireless network and went public with it by revealing details to Gawker Media.

Up until then, AT&T automatically linked an iPad 3G user’s e-mail address to the iPad’s unique number, called the Integrated Circuit Card Identifier (ICC-ID), so that whenever the customer accessed the AT&T Web site, the ICC-ID was recognized, the e-mail address was automatically populated and the ICC-ID was displayed in the URL in plain text.

Spitler is accused of writing a script called the “iPad 3G Account Slurper” and using it to harvest AT&T customer data via a brute force attack on the site, which fooled the site into revealing the confidential information, according to the criminal complaint filed last week but unsealed and released publicly today.

The complaint includes Internet Relay Chat messages supposedly sent between Auernheimer and Spitler in which they talk about selling the e-mail addresses to spammers, shorting AT&T stock before releasing details of the breach, and destroying evidence.

“If we can get a big dataset we could direct market iPad accessories,” Auernheimer says in a message to Spitler, according to the complaint.

In another chat session included in the complaint, Spitler says he would like to stay anonymous so he doesn’t get sued. “Absolutely may be legal risk yeah, mostly civil you absolutely could get sued,” Auernheimer replied, the complaint read.

Before going to Gawker, Auernheimer also allegedly contacted Thomson-Reuters and the San Francisco Chronicle, and sent an e-mail to a board member at News Corp. whose e-mail address was leaked in the breach in attempts to get news articles written about the incident, according to the complaint.

Asked if he reported the hole to AT&T, Auernheimer replied “totally but not really…I don’t (expletive) care I hope they sue me”, according to the chat logs.

“Those chats not only demonstrate that Spitler and Auernheimer were responsible for the data breach, but also that they conducted the breach to simultaneously damage AT&T and promote themselves and Goatse Security,” the U.S. Attorney’s office said in a statement.

AT&T has spent about US$73,000 as a result of the breach, including contacting all iPad 3G customers to notify them, the complaint says. Among the iPad users who appeared to have been affected were White House Chief of Staff Rahm Emanuel, journalist Diane Sawyer, New York Mayor Michael Bloomberg, movie producer Harvey Weinstein, and New York Times CEO Janet Robinson.

Auernheimer told CNET last summer that the data exposed in the breach was contained. The concern was that iPad users who had their e-mail addresses exposed would then be at risk of receiving phishing or spam e-mail that appeared to be from Apple or AT&T but which was designed instead to trick them into revealing more information or downloading malware.

Auernheimer did not return an e-mail seeking comment, and Spitler could not be reached. AT&T did not immediately respond to a request for comment.

Auernheimer, a self-described Internet “troll”, was arrested last June when authorities found drugs while searching his home for evidence related to the AT&T-iPad investigation. He was later released on bail.

This article was first published as a blog post on CNET News.

App servers potential threat to mobile landscape

While both Web and app servers face pressing security issues, the latter are increasingly in the firing line as more users access apps from mobile devices. The risk is further exacerbated by the fact that the technologies behind app servers are more complex, cautioned a security executive.

According to Jonathan Andresen, technology evangelist at Blue Coat Systems Asia-Pacific, there are two factors behind the security challenges presented by app servers. First, the two-way communication between the user and the app server has intensified. This can result in users unknowingly “uploading” malicious content to an app server that is not protected, Andresen said in an e-mail.

Second, compared with Web servers, app servers need more CPU power, he said, noting that this makes app servers more vulnerable to denial-of-service (DoS) attacks.

These two factors, combined with a rise in threats targeting mobile devices, put app servers in an “especially challenging” position, he said.

Another security player agreed with Andresen’s observation.

Paul Oliveria, technical marketing researcher at Trend Micro, noted that many apps today are essentially “mini browsers” in that they gather user input, send it to a server and display the results for users to view.

Oliveria explained: “These [app] servers are vulnerable to all the usual attacks that traditional Web servers are vulnerable to, and in fact, probably more so.”

He pointed out that “almost anyone” can now develop an application and sell it. In the case of Google Android apps, for example, interested developers can simply submit an application form, pay US$25 and start developing apps.

Given this scenario, and the relatively small investment required of developers, he questioned whether these developers would be as committed as more established developers to beefing up their app server security.

To combat potential threats to app servers, Oliveria reckoned that any good and reputable developer would expect users to behave in unpredictable ways and would code apps to restrict the type of information users can send to the app server.

He also called on developers to pay attention to securing their server-side infrastructure which can be accessed not only via an app, but also through a Web browser or direct network connection.

Paul Ducklin, head of technology at Sophos Asia-Pacific, added that less is more with regard to the amount of information users should be allowed to access via app servers.

He noted that a traditional Web server is set up to help a company get as many people as possible to visit its corporate Web site and learn about its operations, but the Web administrator will only put up information that the company wants the public to see.

App servers, however, often give public access to information that is traditionally not made available to users outside the company, Ducklin noted.

“So developers need to ensure that when they make it easier for users to access the app servers [for more information], they don’t open up too much or they may experience their personal ‘Wikileaks moment‘,” he warned.

Andresen recommended deploying purpose-built security appliances such as application firewalls as a best practice to secure app servers. He explained that adding another layer in front of the application server would ensure security is not compromised, regardless of whether coding for the application is secure or not.

He also zoomed in on social networking apps, noting that with over 30 billion pieces of content such as Web links, blog posts and photos, shared on these platforms each month, it is “extremely difficult for application vendors to detect malicious content uploaded by users”.

In this landscape, it would not be viable for mobile users to deploy a complete PC-centric security tool on devices that have limited processing abilities, Andresen added.

“What users need is a lightweight browsing capability that can leverage the processing capabilities of a user-driven cloud network [to filter, validate and secure Web content delivered to mobile devices],” he surmised.

RSA: SMS bank tokens vulnerable

Mobile phone attacks will increase this year as criminals attempt to intercept SMS-based authentication tokens, according to security company RSA.

The tokens are designed to complement username and password log-in checks by requiring users to validate payments with unique numerical codes, in this instance sent by SMS.

The approach is becoming more popular: the Commonwealth Bank of Australia claims 80 percent of its customer base uses tokens to validate third-party payments, either via SMS or through safer handheld token-number generators. The bank isn’t forcing customers to use them, but those who don’t will not be permitted to carry out high-risk transactions over NetBank.
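
For illustration, here is a minimal sketch of how such a numeric one-time code might be issued and verified on the server side. This is generic Python, not any bank's actual implementation, and the transaction identifiers are hypothetical:

import secrets
import time

PENDING = {}          # transaction_id -> (code, expiry timestamp)
CODE_LIFETIME = 300   # seconds the code stays valid

def issue_code(transaction_id):
    code = "{:06d}".format(secrets.randbelow(1_000_000))   # 6-digit random code
    PENDING[transaction_id] = (code, time.time() + CODE_LIFETIME)
    return code   # in practice handed to an SMS gateway, never echoed to the browser

def verify_code(transaction_id, submitted):
    code, expires = PENDING.pop(transaction_id, (None, 0))
    return code is not None and time.time() < expires and secrets.compare_digest(code, submitted)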

RSA said in a 2011 predictions report that sending tokens via SMS will make phones a target.

“The use of out-of-band authentication SMS…as an additional layer of security adds to the vulnerabilities in the mobile channel,” the company said in its report.

“A criminal can…conduct a telephony denial-of-service (DoS) attack which essentially renders a consumer’s mobile device unavailable.

“SMS forwarding services are also becoming mainstream in the fraud underground and enable the [token] sent by a bank via text to a user’s mobile phone to be intercepted and forwarded directly to the cybercriminal’s phone.”

The company said that mobile phone smishing attacks, or phishing scams sent via SMS, will also rise this year.

“Success rates are higher with a smishing attack compared to a standard phishing attack, as consumers are not conditioned to receiving spam on their mobile phone so are more likely to believe the communication is legitimate,” the report said.

It said there are no effective technologies to prevent smishing.

The report also claimed that the infamous Zeus malware, widely blamed for most of the online transaction fraud, will merge with rival SpyEye to create a hybrid trojan.

It alleges that the new hybrid will include a kernel mode rootkit, improved HTML infection abilities and remote desktop access.

“Should [its creator] act on his plans, this already spells evolution in the type of commercially available malware likely to be sold in the underground in 2011,” the report read.

This article was first published on ZDNet Australia.

OECD: Cyberwar risk is exaggerated

While governments need to prepare for cyberattacks involving espionage or malware, the likelihood of a sophisticated attack like Stuxnet is small, according to a study by the Organisation for Economic Co-operation and Development (OECD).

In a cyberwarfare report (PDF) released yesterday, the OECD said that the risk of a catastrophic attack on critical national systems has been exaggerated. The majority of cyberattacks are low-level and cause inconvenience rather than serious or long-term disruption, according to a co-author of the report, professor Peter Sommer of the London School of Economics.

“There are many scare stories, which, when you test, don’t actually pan out,” Sommer said. “When you analyze malware, a lot is likely to be short-term, or fail.”

Read more of “Cyber-war risk is exaggerated, says OECD study” at ZDNet UK.

Facebook tweak reveals addresses, phone numbers

In what is potentially another privacy misstep, Facebook has made a change to a permissions dialog box users see when downloading third-party Facebook apps–a change that potentially makes users’ addresses and phone numbers available to app developers.

The tweak was made known to developers of third-party apps last Friday night, by way of a post on the Facebook Developer Blog. Basically, when a person starts downloading a third-party Facebook app, a “Request for Permission” dialog box appears that asks for access to basic information including the downloader’s name, profile picture, gender, user ID, list of friends, and more. What’s new as of Friday is an additional section that asks for access to the downloader’s current address and mobile phone number.

As mentioned in numerous media reports, the concern among Facebook users and privacy advocates is that users won’t notice the change and will click the dialog box’s Allow button unthinkingly. Further, people are worried that unscrupulous developers could cook up bogus apps with the sole purpose of capturing the private information–apps that wouldn’t necessarily be spotted and taken down immediately. Aside from the potential for outright hacking and identity theft, it’s not unheard of for app developers to sell information on Facebook users to data brokers.

Users of third-party Facebook apps can simply click the Don’t Allow button–which reportedly won’t interfere with a successful download–or they can remove their address and phone number from their Facebook profile.

Graham Cluley, with security company Sophos, suggested in his own blog post that users do the latter. (The post was brought to our attention by PC Magazine.)

“My advice to you is simple,” Cluley wrote, highlighting the following with boldface text, “remove your home address and mobile phone number from your Facebook profile now.”

Cluley also wondered if Facebook could have taken a safer approach.

“Wouldn’t it be better if only app developers who had been approved by Facebook were allowed to gather this information?” he wrote. “Or–should the information be necessary for the application–wouldn’t it be more acceptable for the app to request it from users, specifically, rather than automatically grabbing it?”

ZDNet Asia’s sister site CNET e-mailed Facebook a request for comment but hadn’t heard back by publication time.

Privacy was a major issue for Facebook last year, with the company provoking the concern of privacy advocates, lawmakers, and social-networking fans alike.

This article was first published as a blog post on CNET News.

App marketplace vendors mum on account hacks

Mobile app store vendors were coy about incidents related to account hacks when asked if they had preventive measures to safeguard hacked accounts from being exploited.

Following recent reports of hacked Apple iTunes accounts being sold on Chinese online auction site Taobao, ZDNet Asia queried app marketplace operators about security measures they implemented to protect accounts from being hacked and used illegally.

Chris Chin, Microsoft’s Asia Pacific director of developer marketing for mobile communication, said users who discover that their Windows Live ID has been compromised should recover their account by resetting their password. Windows Phone 7 users buy apps from the Microsoft Windows Phone Marketplace which is linked to their Windows Live accounts.

Chin added: “If you believe unauthorized Marketplace purchases were made with your account, contact our support team.” However, he did not reveal if there have been reports of hacked Windows Live accounts being used to buy apps illegally or the types of safeguards Microsoft has implemented to prevent such incidents from happening.

Chin, however, did say that the company is “focused on helping to educate people about what they can do to increase their online safety and reduce the risk of fraud”.

Noting that a common cause of compromised online accounts is threats from malware and phishing, he added that users should use a secure Web browser when surfing online.

Google declined to comment for the story.

When contacted, Apple did not respond specifically to ZDNet Asia’s queries on what preventive measures it had implemented to protect its users. Instead, a company spokesperson pointed to a news report that revealed Taobao had since taken down auctions of hacked iTunes accounts and added that the Chinese company should instead be contacted for comments.

Taobao spokesperson, Justine Chao, told ZDNet Asia in an e-mail interview that the Chinese auction site removed the listing of hacked accounts after receiving complaints from Taobao users that the iTunes accounts sold were “not what they expected”.

“We had not been advised by Apple to take any action thus far,” she noted. “Our decision to remove the listings was done in the interest of protecting the consumers who shop on Taobao.”

Previous reports noted that the site was reluctant to take down the listings unless it receives “a valid takedown request”.

Hacked user shares experience
A ZDNet Asia reader, Kassandra, recalled the harrowing experience she went through when her iTunes account was hacked and used to purchase apps, and the long process it took to dispute the charges.

In an e-mail interview, she explained that she discovered on May 11, 2010, that her iTunes account was used to purchase apps that she did not download. The New York-based sales coordinator said the apps purchased were in Mandarin and were transacted in China.

She said she has always been careful about managing her financial information and frequently changes all her passwords. A credit card number she used was stolen once but Kassandra said she had taken care then to change all her credit cards.

When she realized the app purchases had been made illegally via her iTunes account, she tried to contact Apple but could not find a dedicated iTunes customer service number to call.

“Getting to talk to an actual human being [at Apple iTunes] was a process,” she recalled. “I e-mailed their customer service but I needed action to be taken immediately, so I called the main Apple customer service and just kept talking to whoever I could and asking to be transferred [to the relevant person].”

“They repeatedly told me to e-mail iTunes but I wouldn’t take that for an answer,” Kassandra said. Her perseverance was rewarded when she was transferred to a department handling Apple accounts and the customer service representative was helpful, she noted.

The representative then said the company would do whatever it could to resolve the issue but added that it was not possible for an iTunes account to be hacked. “I found out that wasn’t true when I searched online and found that many people have experienced their accounts getting hacked into,” Kassandra said.

She noted the Apple representative told her the bank would handle the money issue. However, her bank had to contact Apple to dispute the charges, which amounted to more than US$400. She added that she made frequent calls to the bank to make sure the dispute would be managed smoothly.

Kassandra said: “At one point, the bank was not going to take the charges off because it said the purchases ‘were similar to my purchase history with Apple’.”

While the dispute was eventually resolved, the incident has made her nervous about making purchases online. “I do not feel safe,” said Kassandra.

Another mobile user, Nicole Nilar, shared that while she is not worried about online security when buying apps, she is more concerned about purchasing fake applications. A senior digital marketing executive who owns an Android phone, Nilar told ZDNet Asia in an instant message interview that she had heard about illegitimate applications masquerading as real applications in Google’s Android Market.

“The developers rip off the screenshots of popular apps and sell them at a high price. It’s only after buyers have made their purchase that they realize they paid US$6 to US$8 for only a wallpaper,” she said.

While she noted that Apple might be too strict with its app ecosystem, she said Google should take a few leaves out of Cupertino’s book and implement measures to ensure apps on its marketplace are legitimate.

Global spam traffic rebounds as Rustock wakes

Spam is on the rise after the Rustock botnet awoke from its Christmas slumber, according to Symantec.

On Monday the Rustock botnet, responsible for a significant portion of the world’s spam, resumed activity after pausing spam operations on Dec. 25.

“As Rustock has now returned, this means the overall level of spam has increased. MessageLabs Intelligence honeypot servers have seen an increase of roughly 98 percent in spam traffic between 00:00 and 10:00 today compared to the same period on Jan. 9,” Symantec wrote on Monday. “It is too early to say what effect this will have on global spam levels, or if this return is permanent, but at the moment it certainly seems as if the holiday is over and it’s now back to business as usual,” it said.

Read more of “Global spam traffic rebounds as Rustock wakes” at ZDNet UK.

Tablets unsafe for enterprise adoption?

With tablets becoming more popular on the consumer and enterprise front, experts agree that security is an element that must be dealt with, especially as more applications are developed to enhance their usability.

Edison Yu, manager for ICT practice at Frost and Sullivan, warned that it is “pertinent” for users to start being aware of the risks. Many of the apps, he said in an e-mail, “may actually look to leverage on the increasingly prevalent habit of users sharing their personal data around freely, and [enable] cybercriminals to steal and sell private information”.

According to Kwa Kim Chiong, CEO of JustLogin, the security risks tied to accessing apps via tablets are no different from that of accessing them via the Web. “Whichever means you choose to access the applications, there will be threats”, he said in an e-mail.

The head of the Singapore-based software-as-a-service (SaaS) provider added that the Wi-Fi networks tablet users log on to contribute to the overall risk level, as the data transmitted could be intercepted by hackers.

However, Bryan Ma, associate vice president for devices and peripherals at IDC Asia-Pacific’s domain research and practice groups, said the threat to tablets is not a concern for now: while tablets, like other computing devices, are “theoretically speaking” open to threats, the user base is not yet big.

“If you look at security threats, they tend to threaten the Windows platform, mainly because of the sheer number of users,” Ma noted.

Tablet usage, though, is on the uptrend. In a report released last November, research analyst Gartner predicted that media tablets will displace around 10 percent of PC units by 2014. A separate forecast from FBR Capital Markets indicated that 70 million of such devices will be sold this year, with a PC sale lost for every 2.5 tablets sold.

Secure tablet ecosystem takes many hands to clap
As more enterprises adopt tablets, Frost & Sullivan’s Yu agreed that vendors can look to incorporate into future models more security features, on top of the ability to communicate with other devices and technologies.

“It is critical for the tablet to take on more enterprise-class capabilities, be it support for enterprise apps or reaching the required performance levels,” he noted. “With mobility expected to characterize the office environment of the future, the tablet could find itself at the forefront of the enterprise mobile computing trend.”

One such tablet that is already perceived to be “safe” is the PlayBook by Research In Motion (RIM). The highly publicized but yet-to-be-launched device would have security functions built in, as RIM’s customer base tends to be businesses and IT managers, said Ma of IDC. Security protocols to protect sensitive data from unauthorized access, for instance, would be among such features, he explained.

Kwa, whose SaaS company develops human resource and collaboration apps for the Apple iPhone and iPad, said JustLogin’s apps communicate directly with the Web services hosted at their own servers, and no data is stored locally on the tablets.

“Before the user is able to access the data, the application will encrypt the password entered on the tablet and call one of the Web services. The validation is done through a series of handshaking protocols before the data is sent over,” he explained.

Handshaking protocols refer to technical rules a computer must observe to establish connection with another system.
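
A minimal sketch of the pattern Kwa describes, using hypothetical endpoint names rather than JustLogin's actual API: credentials travel over TLS to the vendor's Web service, a short-lived session token comes back, and nothing is persisted on the device.

import requests

API = "https://example.com/api"   # placeholder host, not JustLogin's real endpoint

def login(username, password):
    # TLS (certificate verification is on by default in requests) protects the
    # credential in transit; the response token is kept in memory only.
    resp = requests.post(API + "/login", json={"user": username, "password": password}, timeout=10)
    resp.raise_for_status()
    return resp.json()["session_token"]

def fetch_leave_balance(token):
    # Hypothetical HR-app call; every request carries the short-lived token.
    resp = requests.get(API + "/leave/balance", headers={"Authorization": "Bearer " + token}, timeout=10)
    resp.raise_for_status()
    return resp.json()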

Asked who should shoulder the responsibility to ensure a safer tablet ecosystem, both Kwa and Frost & Sullivan’s Yu said all parties–from hardware vendors to app stores and users–have their roles to fulfill.

While IDC’s Ma argued the hardware vendor’s responsibility is merely to make its product as attractive as possible, Yu said adding security features is the way forward, as vendors “can do their bit in protecting end users from cyberthreats since many consumers may not be as security-savvy.”

End users could limit information sharing on the Web, and enterprises “have to realize that tablets are still consumer-based, therefore these devices may not be safe for corporate adoption”, Yu cautioned.

Kwa pointed out that apps, too, have to be secure. To that end, he noted that apps in Apple’s App Store are more secure than Web applications available on the Internet, as they are vetted before being released for users to download.

“At least [the process] is controlled and there is an identifiable owner behind each application,” Kwa said.

Sophos: Spam to get more malicious

Spam is becoming more malicious in nature as trickery tactics change in line with current user interests, according to a new report released Tuesday by Sophos.

The security vendor’s “Dirty Dozen” report, reviewing global spam trends between October and December 2010, noted that more unsolicited e-mail messages were spreading malware and attempting to trick unsuspecting users into giving confidential data such as user names and passwords.

Sophos also noted an increase in more focused, targeted e-mail attacks, or spear-phishing. Cybercrooks continued to seek victims via social networks, with a growing number of reports of malicious apps, compromised profiles and unwanted messages spreading across social networking sites such as Facebook and Twitter.

“Spam is certainly here to stay, however, the motivations and methods are continuing to change in order to reap the greatest rewards for the spammers,” Graham Cluley, senior technology consultant at Sophos, said in a statement. “What’s becoming even more prevalent is the mailing of links to poisoned Web pages–victims are tricked into clicking a link in an e-mail, and then led to a site that attacks their computer with exploits or attempts to implant fake antivirus software.”

Traditional spam messages touting pharmaceutical products have not gone away either, Sophos noted. Tens of millions of Americans are believed to have purchased drugs from unlicensed online sellers, it added in the report.

Cluley noted: “As long as spammers continue to make money from these schemes, Internet users can be sure that they’ll continue to receive unsolicited e-mail and social networking scams.

“To combat this, it’s essential that computer users remain wary of clicking on unknown links, regardless of whether they appear to be on a trusted contact’s social networking page.”

US reigns as spam king
Europe and Asia were the top two continents of spam origin, with a combined share of 64 percent, while the United States continued to be the country responsible for the most junk e-mail. The U.S. accounted for 18.8 percent of spam messages worldwide in the previous quarter, and continues to be plagued by bots, or zombie PCs that are remotely controlled by hackers, Sophos said.

Three Asian nations made the latest Dirty Dozen list: India took second spot with a 6.9 percent share of spam relayed between October and December 2010; South Korea was No. 8 with 3 percent; and Vietnam, which accounted for 2.8 percent, clocked in at No. 10. The three countries have consistently been ranked among the Top 12 over the last year, according to Sophos.

Microsoft plugs three Windows holes, works on others

Microsoft today issued two bulletins fixing three holes in Windows, including one rated critical for Windows XP, Vista, and Windows 7 as part of Patch Tuesday.

“We are not aware of proof-of-concept code or of any active attacks seeking to exploit the vulnerabilities addressed in this month’s release,” the company wrote in a Microsoft Security Response Center blog post.

The critical vulnerability is addressed in Bulletin MS11-002. The bulletin fixes the critical hole and an “important” vulnerability, both in Microsoft Data Access Components, that could allow an attacker to take over the computer if a user merely viewed a malicious Web page.

The second bulletin, MS11-001, resolves an “important” vulnerability that could allow remote code execution if a user opens a legitimate Windows Backup Manager file that is located in the same network directory as a malicious library file. The user would have to visit an untrusted remote file system or WebDAV (Web-based Distributed Authoring and Versioning) share for the attack to be successful.

More details are in the security advisory for this month.

Meanwhile, Microsoft revised Security Advisory 2488013 related to Cascading Style Sheets (CSS) to add an additional workaround for a vulnerability that affects Internet Explorer and for which there have been reports of targeted attacks.

“The most important vulnerability, known as ‘css.css’, affects all versions of Internet Explorer and is rated critical,” said Wolfgang Kandek, chief technology officer at Qualys. “The exploit code is public and targeted attacks have been observed.”

Security experts said they were more interested in when Microsoft plans to patch existing zero-day holes than in the fixes that were released.

“Instead of talking about the number of bulletins being patched today, everyone’s mind is on the five vulnerabilities that are not being patched,” said Andrew Storms, director of security operations for nCircle.

Microsoft has a list of the pending issues here. On that list is a bug in IE disclosed by Google security researcher Michal Zalewski for which he said an exploit had been leaked to the Web. He also publicly released a tool he said he had used to find the hole and others in major browsers. Microsoft says it is still assessing the issues Zalewski brought up.

This article was first published as a blog post on CNET News.

US memo on insider threats leaked

A White House memo on how to improve data security in the wake of the publication of hundreds of thousands of leaked US documents on WikiLeaks has been leaked.

 

Leaked memo on WikiLeaks aftermath

The memo, which was circulated to the heads of U.S. government departments and agencies on Jan. 3, was handed to MSNBC news. The document was formulated in response to leaks to the WikiLeaks Web site by whistleblowers and designed for use by agencies handling classified material.

The memo asks whether government agencies that handle national security documents have adequate data security practices in place, including appropriate access controls. The document provides a checklist, with questions including whether disparate information about employee evaluations, polygraph tests and IT auditing of user activities is pieced together to give indicators of insider threats. The memo also asks whether the agency uses psychiatrists and sociologists to gauge employee “despondence and grumpiness as a means to gauge waning trustworthiness”.

Read more of “US memo on insider threats leaked” at ZDNet UK.

China’s US$90B ups cyberwar stakes

Last year, Northrop Grumman released a report warning that China had a mighty cyber arsenal it could use in a possible future cyber conflict. News last week that Chinese defense spending could be double the public figure suggests such claims may be true, and perhaps even conservative.

The news arose in diplomatic cables dating back to 2006 obtained from Wikileaks by Fairfax newspapers. Australian diplomats reported to the United States that the Australian Government believed China’s military budget was US$90 billion, double the US$45 billion publicly announced by Beijing.

Australian intelligence and defence agencies told the U.S. that China was building a military capability well above that needed to repel a move for independence by Taiwan, and said it had become a risk to stability in the region.

“China’s longer-term agenda is to develop ‘comprehensive national power’, including a strong military, that is in keeping with its view of itself as a great power,” the cables said.

A document (PDF) provided to the U.S.-China Economic and Security Review Commission by Northrop Grumman in October last year claimed that China had a significant cyber warfare capability, including a military and civilian militia comprising network specialists, and fully functional offensive hacking and counter-intelligence wings.

The document also claimed the country has stockpiled a kinetic arsenal that includes lasers, high-power microwave systems and nuclear-generated electromagnetic pulses to supplement its cyber warfare force. It also claimed the country is training its forces to work under “complex electromagnetic conditions”.

While it is unclear if defense specialists espousing China’s cyber warfare capabilities, such as Northrop Grumman, were privy to this information, the larger defense budget would seem to lend credence to their claims.

It’s something governments do not like to discuss. Last year, the United States opened its Cyber Command, but that is still heavily dependent on private industry. Meanwhile, the Australian Defence Force revealed in its Defence Whitepaper that it will “invest in a major enhancement of [its] cyber warfare capability”, yet that appears to centre on response and defensive means.

The extent and intent of cyber warfare arsenals is hotly contested and there are as many cyberwar sceptics as proponents.

Yet, it’s certainly reasonable to suggest China did not splurge US$90 billion on guns and bombs alone. In a time heavy with cyberwar rhetoric, it would make sense for the country to hedge its bets.

This article was first published at ZDNet Australia.

Chinese auction site touts hacked iTunes accounts

Tens of thousands of reportedly hacked iTunes accounts have been found on Chinese auction site Taobao, but the company claims it is unable to take action unless there are direct complaints, according to news reports.

The Global Times reported Thursday as many as 50,000 illegally obtained iTunes accounts were sold on China’s biggest consumer auction site. The Beijing-based newspaper also interviewed a seller who admitted the accounts were hacked but did not reveal how they were obtained.

Taobao, however, said that to protect its users, it would not be taking action until it has received a formal request. In a statement carried by BBC, the company said: “We take all reasonable and necessary measures to protect the rights of consumers who use Taobao, of our sellers and of third-parties. Until we receive a valid takedown request, we cannot take action.”

Advertisements on Taobao for the iTunes accounts offer heavily marked-down prices. One of the listings visited by ZDNet Asia allowed buyers to decide how much they wanted in the accounts, offering US$1 of account credit for only 1 RMB (US$0.15). Buyers are required to purchase at least US$10 and, at the time of writing, 175 transactions had been made.

Access to the iTunes account is, however, limited to 12 hours, according to the listing. It also cautioned buyers that apps bought via this means are not upgradeable and that it would be a matter of time before illegally acquired iTunes accounts are closed.

Apple had declined to comment on the news, according to BBC.

This is not the first time Apple iTunes accounts have been compromised. In July 2010, reports surfaced that customers’ accounts were hacked and used to purchase software. However, it is not clear whether the accounts being sold on Taobao are related to the previous incident.

Corporate data accessed by too many

With increasing ease of access to corporate data, organizations are in danger of “breaches” in the form of files, rather than database records, warned security vendor Imperva, adding that the number of affected companies is set to rise.

As more and more sensitive data gets disseminated as unstructured content, hackers may seek to take advantage of the loopholes, and make away with confidential data for financial or personal gains, Stree Naidu, Imperva’s Asia-Pacific vice president, told ZDNet Asia in an e-mail interview.

“While most business applications use structured storage such as databases to maintain and process sensitive and critical data, users are constantly creating and storing more unstructured content, based on the information taken from these systems,” he said.

Such information includes data stored in Excel spreadsheets, presentations and medical lab results sent as letters to patients. However, it is not merely the transfer of the information that is opening up loopholes and opportunities for unauthorized access, Naidu explained.

“The documents do not actually need to be sent anywhere for a threat to exist. What we’ve observed, and what the recent WikiLeaks incidents have shown, is that data is accessible by too many people within the business–people who do not have a legitimate need for access, despite strict company policies,” he pointed out.

Therefore, reducing access rights to a business need-to-know level and monitoring access activity are some ways to mitigate the risk.

Furthermore, with data volume increasing at 60 percent every year, increased sharing of data, as well as data retention policies, are also contributing to the threat of security breaches, Naidu said.

The situation is further complicated by the fact that files are “autonomous entities”, which organizations do not have control of even with today’s tools, he added. Unlike database records, which are created by pre-programmed applications, the inability to maintain control of files “may result in excessive access privileges and an inadequate audit trail of access to sensitive information”.

Cloud-based software such as Google Docs and Jive, as well as internal document management systems such as Microsoft’s SharePoint and EMC’s Documentum, have become part of enterprise IT, further expanding the attack surface and, therefore, the risk of threats.

The WikiLeaks incident last year was a clear indication that “massive leakage and compromise of sensitive information is indeed becoming a clear and present danger”, according to Naidu.

Another high-profile breach involved a former Goldman Sachs employee who stole source code for a proprietary high-frequency trading program by using his desktop to upload the code to a server based in Germany, Naidu noted.

The bank identified the misconduct after observing large amounts of data leaving the servers, which led to the rogue employee’s arrest.

With these in mind, Naidu said organizations ought to budget and plan for the next generation of file access monitoring and governance tools to reduce the risk of file exposure. Some key characteristics to take note of include:

  • Policies set and expressed by content of file, rather than metadata
  • Flexible deployment, without impacting data stores or network architecture
  • Adaptive deployment with focus on the most accessed files, without compromising the ability to track sensitive information in older files
  • Ability to identify file owners and excessive rights to files

The executive also advised enterprises to be constantly on the lookout, as hacking methods are always “improving and evading detection”. Businesses, he urged, should increase monitoring visibility of traffic and set security controls across all organizational layers.

“A security control should understand these shifts in the hacker industry and rapidly incorporate these changes in their organization,” said Naidu. “This could even include incorporating a reputation-based control, which could stop large automated Web-based attacks known to originate from malicious sources.”

Spam drops sharply over Christmas

The amount of spam being pumped out by networks of compromised computers dropped sharply over the festive period, according to Symantec.

The security company’s subsidiary MessageLabs said the steep drop was in part due to spam coming from the Rustock botnet slowing to a trickle, while two botnets, Lethic and Xarvester, appear to have ceased activity.

“Rustock is sending spam in much-reduced volumes, while the other two botnets have stopped sending spam altogether,” MessageLabs intelligence senior analyst Paul Wood told ZDNet UK on Thursday.

Read more of “Spam drops sharply over Christmas” at ZDNet UK.

Microsoft to fix Windows holes, but not ones in IE

Microsoft said Thursday that it will release two security bulletins next week fixing three holes in Windows, but it is still investigating or working on fixing holes in Internet Explorer that have been reportedly exploited in attacks.

One bulletin due out on Patch Tuesday, rated “important,” affects only Windows Vista, but the second, with an aggregate rating of “critical,” affects all supported versions of Windows.

Microsoft said it is not releasing updates to address a hole affecting Windows Graphics Rendering Engine that it disclosed earlier this week, or one disclosed in late December, Security Advisory 2488013, that affects Internet Explorer and for which there have been reports of targeted attacks, the company said in a post on the Microsoft Security Response Center blog.

“We continue to actively monitor both vulnerabilities and for Advisory 2488013 we have started to see targeted attacks,” the post said. “If customers have not already, we recommend they consult the Advisory for the mitigation recommendations. We continue to watch the threat landscape very closely and if the situation changes, we will post updates here on the MSRC blog.”

Also not mentioned in the Patch Tuesday preview announcement by Microsoft is a bug in IE disclosed last weekend by Michal Zalewski, a security researcher for Google based in Poland. Zalewski released a tool he used to find the hole and others in all the major browsers and said that an exploit for the IE bug had been leaked to the Web accidentally. Security firm Vupen has confirmed the critical hole in IE 8. Microsoft says in Security Advisory 2490606 that it is investigating the bug reports.

Josh Abraham, a security researcher at Rapid7, was surprised that Microsoft was not rushing to fix holes that were reportedly being used in attacks.

“With only two bulletins this month, the big shock is that Microsoft is not addressing two security advisories that have already been weaponized,” Abraham said. “I would bet that if the malicious attackers start using the exploits, then we will see an out-of-band patch.”

Meanwhile, as Microsoft released its Patch Tuesday preview, Sophos is warning people about a fake Microsoft security update e-mail circulating that contained a worm. The subject line says “Update your Windows” and urges recipients to download an attached executable. But Microsoft does not issue security patches via e-mail attachments. Another clue that it’s a scam–Microsoft is misspelled in the forged e-mail header as “microsft.”
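
A rough heuristic for spotting this kind of forgery, assuming the raw message text is available, might look like the sketch below. It is illustrative only; real mail filters are far more thorough, and the trusted-domain list and function name are invented for the example:

from email import message_from_string
from email.utils import parseaddr

def looks_like_fake_update(raw_message):
    msg = message_from_string(raw_message)
    _, sender = parseaddr(msg.get("From", ""))
    domain = sender.rsplit("@", 1)[-1].lower()
    has_exe_attachment = any(
        (part.get_filename() or "").lower().endswith(".exe")
        for part in msg.walk()
    )
    # Microsoft does not distribute patches as e-mail attachments, so an executable
    # attachment or a look-alike domain such as "microsft.com" is a red flag.
    return has_exe_attachment or domain not in {"microsoft.com"}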

This article was first published as a blog post on CNET News.

Sourcefire buys Immunet for US$21M

Network security company Sourcefire is acquiring Immunet, a cloud-based anti-malware startup, for US$21 million in cash, the companies announced Thursday.

The acquisition expands the cloud-based offerings for Sourcefire, creator of the open-source Snort intrusion detection technology.

Columbia, Md.-based Sourcefire said it will not lay off any of Immunet’s full-time staff, which is based in Palo Alto, Calif.

Sourcefire paid US$17 million at the closing of the deal and will pay US$4 million during the next 18 months dependent on product delivery milestones, the companies said in a statement.

Immunet chief executive Oliver Friedrich co-founded SecurityFocus, which Symantec acquired in 2002, and Secure Networks, which McAfee bought in 1998.

The acquisition announcement comes on the heels of news Wednesday that Dell is acquiring SecureWorks.

This article was first published as a blog post on CNET News.

US govt e-card scam hits confidential data

A fake U.S. government Christmas e-card has managed to siphon off gigabytes of sensitive data from a number of law enforcement and military staff who work on cybersecurity matters, many of whom are involved in computer crime investigations.

According to news.softpedia.com, the rogue e-mail messages sent out on Dec. 23 last year had the subject “Merry Christmas” and purported to originate from a jeff.jones@whitehouse.gov address.

The body message read: “As you and your families gather to celebrate the holidays, we wanted to take a moment to send you our greetings.

“Be sure that we’re profoundly grateful for your dedication to duty and wish you inspiration and success in fulfillment of our core mission.”

This was followed by two links to the alleged greeting cards, which led to pages hosted on compromised legitimate Web sites. Victims who clicked on the links were infected with a Zeus Trojan variant, which stole passwords and documents and uploaded them to a server in Belarus, reported krebsonsecurity.com.

The article also revealed that the latest attack used the same technique as one uncovered last year, in which 74,000 PCs were found to be part of a botnet. In the earlier incident, victim machines were controlled by Web sites registered with the same e-mail address. Alex Cox, principal research analyst with NetWitness, said the new case either involved the same person or copied the exact same technique.

Security blogger Mila Parkour pointed out that the “pack.exe” file downloaded by the Trojan was a Perl script converted to an executable file by way of a commercial application called Perl2exe. The pack program was responsible for stealing the documents on a victim’s computer and relaying the data to a file repository in Belarus.

Krebsonsecurity.com author Brian Krebs said: “The attack appears to be the latest salvo from Zeus malware gangs whose activities over the past year have blurred the boundaries between online financial crime and espionage, by stealing both financial data and documents from victim machines.”

He explained that this activity was unusual, as most criminals using Zeus were interested in money-related activities, whereas the siphoning of government data is associated with advanced persistent threat attacks, the same category as the Stuxnet attacks.

Some of the victims included an employee at the National Science Foundation’s Office of Cyber Infrastructure, an intelligence analyst in Massachusetts State Police and an employee at the Financial Action Task Force.

Another report by news agency AP said there was no evidence that the stolen classified information had been compromised.

Microsoft warns of Windows flaw affecting image rendering

Microsoft warned on Tuesday of a Windows vulnerability that could allow an attacker to take control of a computer if the user is logged on with administrative rights.

To be successful, an attacker would have to send an e-mail with an attached Microsoft Word or PowerPoint file containing a specially crafted thumbnail image and convince the recipient to open it, Microsoft said in its advisory, which also contains information on workarounds.

An attacker also could place the malicious image file on a network share and potential victims would have to browse to the location in Windows Explorer.

The flaw, which is in the Windows Graphics Rendering Engine, could allow an attacker to run arbitrary code in the security context of the logged-on user, meaning that accounts that are configured to have fewer user rights would be affected less.

The vulnerability affects Windows XP Service Pack 3, XP Professional x64 Edition Service Pack 2, Server 2003 Service Pack 2, Server 2003 x64 Edition Service Pack 2, Server 2003 with SP2 for Itanium-based systems, Vista Service Pack 1 and Service Pack 2, Vista x64 Edition Service Pack 1 and Service Pack 2, Server 2008 for 32-bit, 64-bit, and Itanium-based systems and Service Pack 2 for each.

Microsoft said it is not aware of attacks exploiting the vulnerability or of any impact on customers at this time. The company is working on a fix but did not indicate when it would be available.

This article was first published as a blog post on CNET News.

US agency hunts down international cybercrime ring

A Vietnam-based international cybercrime ring believed to be involved in identity theft, wire fraud and money laundering is the target of a U.S. law enforcement agency following the house raid of two Vietnamese students suspected to be “money transfer mules”, news agencies reported.

On Monday, technology news site ComputerWorld reported that the U.S. Department of Homeland Security (DHS)’s Immigration and Customs Enforcement (ICE) investigations unit had raided the house of two Vietnamese Winona State University exchange students and seized their documents and computer equipment.

The 22-year-old students, Tram Vo and Khoi Van, are suspected of working as money transfer mules for a Vietnam-based international cybercrime ring, having made more than US$1.2 million selling software, video games and Apple’s iTunes gift cards on eBay purchased with stolen credit card numbers, the report stated, citing the affidavit filed in support of the search warrant issued for the raid.

Both of them controlled more than 180 eBay accounts and more than 360 PayPal accounts, which were opened using stolen identities, noted a separate report by the Star Tribune, a newspaper based in Minnesota in the United States.

ComputerWorld explained that the students had posed as eBay sellers using the stolen identities to sell discounted products such as Rosetta Stone software, video games, textbooks and Apple iTunes gift cards.

When a legitimate eBay buyer orders the products, the pair would purchase the items from a third-party merchant using stolen credit card accounts and request that the items be sent to the buyer. However, the merchant would not be able to claim payment for the products, as the owner of the stolen credit card would inform the relevant bank that the payment was an unauthorized transaction, the report stated.

Online retailers such as eBay, PayPal, Amazon, Apple, Dell and Verizon Wireless were among the high-profile victims, noted Star Tribune.

Cybercrime gangs’ growing sophistication
The DHS investigation into the Vietnamese cybercrime outfit, code-named “Operation eMule”, began in September 2009, according to the abovementioned affidavit.

In the document, DHS Special Agent Daniel Schwarz wrote: “The criminal ring makes online purchases from e-commerce merchants using stolen credit card information and then utilizes an elaborate network of mules based in the United States. The criminals get stolen credit or bank card numbers by hacking PCs or databases. In some cases, they simply buy the stolen personal information from underground online marketplaces.”

According to ComputerWorld, money mule networks are needed by cybercrime organizations to get the stolen money out of the country, which is the “hard part”. Mules working for the Vietnamese organization, for instance, would get their orders via a secured Web site that is available only to “vetted members”, Schwarz said. He added that the money involved in such crimes is “estimated to exceed hundreds of millions of dollars”.

Such sophisticated cybercrime rings are on the rise, too.

In October last year, authorities arrested more than 100 people in the U.S. and U.K. in connection with another money mule operation, which was operating out of Ukraine, the report stated. Then, scammers hacked into bank accounts, transferred money around and used mules to move the money offshore via services provided by payment companies such as Western Union.

A ZDNet Asia report in July last year also revealed that a Russian check-counterfeiting ring had netted US$9 million through a combination of malware, botnets, virtual private networks and money mules recruited online.

Microsoft warns of Office-related malware

Microsoft’s Malware Protection Center issued a warning this week that it has spotted malicious code on the Internet that can take advantage of a flaw in Word and infect computers after a user does nothing more than read an e-mail.

The flaw was addressed in November in a fix issued on Patch Tuesday, but with malicious code now spotted in the wild, the protection center apparently wants to be sure the update wasn’t overlooked.

Symantec underlined the seriousness of the flaw to ZDNet Asia’s sister site CNET’s Elinor Mills in November:

“One of the most dangerous aspects of this vulnerability is that a user doesn’t have to open a malicious e-mail to be infected,” Joshua Talbot, security intelligence manager at Symantec Security Response, said at the time. “All that is required is for the content of the e-mail to appear in Outlook’s Reading Pane. If a user highlights a malicious e-mail to preview it in the Reading Pane, their machine is immediately infected. The same holds true if a user opens Outlook and a malicious e-mail is the most recently received in their in-box; that e-mail will appear in the Reading Pane by default and the computer will be infected.”

Users of Microsoft Office should be sure to install the fix. You can use your Start menu to check for updates: Click the Start button, click All Programs, and then click Windows Update. Details of the MS10-087 update, including which software versions are affected, can be found here.

This article was first published as a blog post on CNET News.

Researcher reports apparent China interest in IE hole

A security researcher who created a tool he used to find numerous bugs in major browsers has released it to the public, saying the importance of its distribution is heightened by the leak to the Web of an unpatched vulnerability in Internet Explorer.

Michal Zalewski, a Google security researcher based in Poland, announced in a blog post that he was releasing a tool called “cross_fuzz” and said its distribution was a priority because at least one of the vulnerabilities discovered by the tool appears to be known to a mysterious third party.

“I have reasons to believe that the evidently exploitable vulnerability discoverable by cross_fuzz, and outlined in msie_crash.txt, is *independently* known to third parties in China,” Zalewski wrote in a separate post.

“While working on addressing cross_fuzz crashes in WebKit prior to this announcement, one of the developers accidentally leaked the address of the fuzzer in one of the uploaded crash traces. As a result, the fuzzer directory, including msie_crash.txt, has been indexed by GoogleBot,” he continued. “I have confirmed that following this accident, no other unexpected parties discovered or downloaded the tool.”

On December 30, there were two search queries from an IP address in China that matched keywords mentioned in one of the indexed cross_fuzz files, he said.

Of the 100 or so bugs Zalewski said he found in IE, Firefox, Opera, and browsers powered by WebKit, including Chrome and Safari, he said he notified the vendors or developers in July and that they are in varying stages of resolution. He provides a timeline for contacting Microsoft here, noting that his first contact on the matter was in May 2008.

“At this point, we’re not aware of any exploits or attacks for the reported issue and are continuing to investigate and monitor the threat environment for any changes,” Jerry Bryant, group manager for Trustworthy Computing response communications at Microsoft, said in a statement.

This article was first published as a blog post on CNET News.

Data breach affects 4.9 million Honda customers

Japanese automaker Honda has put some 2.2 million customers in the United States on a security breach alert after a database containing information on the owners and their cars was hacked, according to reports.

The compromised list contained names, login names, e-mail addresses and 17-character Vehicle Identification Number–an automotive industry standard–which was used to send welcome e-mail messages to customers that had registered for an Owner Link account.

Another 2.7 million My Acura account users were also affected by the breach, but Honda said the list contained only e-mail addresses. Acura is the company’s luxury vehicle brand.

According to Honda’s notification e-mail to affected customers, the list was managed by a vendor. All Things Digital suggested, but could not confirm, that the vendor in question is e-mail marketing firm Silverpop Systems, which has been linked with the recent hacking incidents including that of fast-food giant McDonald’s.

In a Web page addressing affected customers, Honda said it would be “difficult” for a victim’s identity to be stolen based on the information that had been leaked. However, it warned that customers ought to be wary of unsolicited e-mail messages requesting personal information such as social security or credit card numbers.

Compelling scams an ‘obvious danger’
Graham Cluley, senior technology consultant at Sophos, pointed out that cybercriminals who possess the list may e-mail the car owners to trick them into clicking on malicious attachments or links, or fool them into handing over personal information.

“If the hackers were able to present themselves as Honda, and reassured you that they were genuine by quoting your Vehicle Identification Number, then as a Honda customer you might very likely click on a link or open an attachment,” he explained in a blog post.

Acura customers, he added, could also be on the receiving end of spam campaigns.

Cluley noted that the incident serves as a reminder that companies not only need adequate measures in place to protect the customer data in their hands, but also need their partners and third-party vendors to “follow equally stringent best practices”.

“It may not be your company [that] is directly hacked, but it can still be your customers’ data that ends up exposed, and your brand name that is tarnished,” he said.

Mozilla exposes older user-account database

Mozilla has disabled 44,000 older user accounts for its Firefox add-ons site after a security researcher found part of a database of the account information on a publicly available server.

The file had passwords obscured with the now-obsolete MD5 hashing algorithm, which has been rendered cryptographically weak and which Mozilla scrapped for the more robust SHA-512 algorithm as of Apr. 9, 2009. The older database didn’t end up anywhere dangerous, Mozilla believes.
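For readers unfamiliar with the difference, the hypothetical Python sketch below contrasts an unsalted MD5 hash with a salted SHA-512 hash of the same password. It is only a simplified illustration of the kind of upgrade described above; it is not Mozilla's implementation, and production systems would typically go further and use a deliberately slow scheme such as PBKDF2 or bcrypt.

```python
# Illustrative comparison of unsalted MD5 vs. salted SHA-512 password hashing.
# Not Mozilla's code; a real system would also use a slow KDF (PBKDF2, bcrypt).
import hashlib
import os

def md5_hash(password):
    # Old style: fast and unsalted, so identical passwords hash identically
    # and precomputed (rainbow) tables can recover them quickly.
    return hashlib.md5(password.encode("utf-8")).hexdigest()

def salted_sha512(password, salt=None):
    # Newer style: a random per-user salt defeats precomputed tables.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.sha512(salt + password.encode("utf-8")).hexdigest()
    return salt.hex() + "$" + digest

print(md5_hash("hunter2"))       # same output every time, for every user
print(salted_sha512("hunter2"))  # different output for each random salt
```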

“We were able to account for every download of the database. This issue posed minimal risk to users, however, as a precaution we felt we should disclose this issue to people affected and err on the side of disclosure,” said Chris Lyon, Mozilla’s director of infrastructure security, in a blog post about the database exposure Tuesday.

Mozilla notified affected users of the problem by e-mail yesterday, it said. “Current addons.mozilla.org users and accounts are not at risk,” Lyon said.

Password security has become a more prominent concern after a hack of Gawker blog sites earlier this month. Even with passwords obscured by strong hash algorithms, user names can be valuable in further hack attempts, especially when people reuse the same password on multiple sites.

“Unique passwords are a requirement, not a luxury,” said Chester Wisniewski of security firm Sophos in a blog post about the event.

This article was first published as a blog post on CNET News.

McAfee: Smartphones, Apple top ’11 crime targets

Security firm McAfee expects malicious activity in 2011 to target smartphones, URL shorteners, geolocation services like Foursquare, and Apple products across the board, according to a report released Tuesday.

“We’ve seen significant advancements in device and social-network adoption, placing a bulls-eye on the platforms and services users are embracing the most,” Vincent Weafer, senior vice president of McAfee Labs, said in a release announcing the report. “These platforms and services have become very popular in a short amount of time, and we’re already seeing a significant increase in vulnerabilities, attacks and data loss.”

In other words, the security infrastructure surrounding popular new services and devices–and more importantly public awareness of potential threats that people may face when using them–may not be up to par with better-established technologies. Take URL shorteners, for example. Because it’s so easy to mask longer URLs with them and because Twitter users have grown accustomed to clicking them without much thought, McAfee expects that they will continue to be targets for spam, scams, and viruses.
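One practical precaution is to expand a shortened URL and look at its real destination before clicking. The snippet below is a minimal sketch of that idea using only Python's standard library; the short link shown is a placeholder, and some shortening services may not answer HEAD requests.

```python
# Sketch: follow a short URL's redirect chain and print the final destination.
import urllib.request

def expand(short_url, timeout=10.0):
    req = urllib.request.Request(short_url, method="HEAD")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.geturl()  # URL after all redirects have been followed

if __name__ == "__main__":
    print(expand("http://bit.ly/example"))  # placeholder link
```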

Social networks will remain hotbeds of malicious attacks, McAfee predicted, but geolocation services like Foursquare and Facebook Places will see new prominence. “In just a few clicks, cybercriminals can see in real time who is tweeting, where they are located, what they are saying, what their interests are, and what operating systems and applications they are using,” McAfee noted. “This wealth of personal information on individuals enables cybercriminals to craft a targeted attack.”

As for hardware, mobile devices (particularly those used on corporate networks), Internet TV platforms like Google TV, and devices running Apple operating systems are anticipated to be prime targets.

McAfee also said that the saga of WikiLeaks, the controversial classified-document repository that dominated headlines around the world late in 2010, is likely to spawn copycats in 2011. The security firm expects “politically motivated attacks” to be on the rise.

This article was first published as a blog post on CNET News.

Microsoft warns of IE zero-day

Microsoft has warned of a vulnerability that affects all versions of the Internet Explorer web browser.

Hackers can use the flaw to take control of a computer, Microsoft said in an advisory on Thursday.

“Microsoft is investigating new, public reports of a vulnerability in all supported versions of Internet Explorer,” said the advisory. “The main impact of the vulnerability is remote code execution.”

Read more of “Microsoft warns of IE zero-day” at ZDNet UK.

Lookout raises US$19.5 million for smartphone security

Lookout Mobile Security, which specializes in armoring smartphones from hackers, said today that it’s raised an additional US$19.5 million in funding.

The San Francisco-based startup says it now has nearly 50 employees and about 4 million registered users of its software, which includes a spyware scanner, remote backups, and a stolen-phone locator. That’s up from a reported 2 million users in September and 3 million in November.

Lookout’s security apps currently are available for Android, BlackBerry and Windows Mobile. In an interview with ZDNet Asia’s sister site CNET, Lookout CEO John Hering said an iPhone version will be “coming very shortly” and customers should expect to “see something in 2011”.

New features in Apple’s iOS 4 operating system, announced in April and made available a few months later, aid development, Hering said. Those changes “enable us to do quite a bit more,” he said.

Some of Lookout’s features, like remote wipe and a more comprehensive remote backup, are available only to customers who purchase the premium version for US$3 a month.

Wednesday’s funding round came from Index Ventures and existing investors Accel Partners and Khosla Ventures.

This article was first published as a blog post on CNET News.

Irate hackers bring down sports body’s Web site

The World Taekwondo Federation’s (WTF) Web site was hacked after it punished a Taiwanese fighter for cheating at the Asian Games, AFP reported.

According to the news agency, the South Korea-based governing body’s site was taken down on Tuesday night, defaced with the words “still unfair” by attackers who supported Taiwan’s taekwondo exponent, Yang Shu-chun. The report did not state how long the site was down; it is now operational.

The seeds of the hackers’ discontent were sown during the Asian Games last month when Yang was found to have extra “detachable” sensors in her socks, an action considered to be illegal in the sporting event. Fighters are only allowed to wear sensors built into their socks, which are then used as part of the electronic scoring system, AFP explained.

Following weeks of investigation, the WTF decided on Tuesday to punish Yang’s wrongdoing with a three-month suspension from the sport. Additionally, her coach received a 20-month suspension, while the Chinese Taipei Amateur Taekwondo Association was fined US$50,000 for “negligence and wrongdoing” for its role in the chain of events.

The decision angered Yang’s supporters and triggered the attack on the governing body’s Web site, said AFP.

Taking the WTF’s site offline was not the first transgression by the hackers, though. Earlier, while investigations were still ongoing, the Asian Taekwondo Union’s Web site carried a statement condemning Yang for a “shocking act of deception”, the news agency reported.

The statement set off a wave of anti-Korean ire in Taiwan, which resulted in hackers bringing down the ATU’s Web site in November, it added.

APAC enterprises still not DDoS-aware

Distributed denial-of-service (DDoS) attacks have been around for at least a decade, with thousands of such incidents taking place each day around the world. But a whopping 99 percent of these attacks go unreported, according to a security expert.

In an e-mail interview with ZDNet Asia, Mark Teolis, general manager of DOSarrest, explained that in light of the recent high-profile WikiLeaks-related security incidents, most large e-commerce sites have some level of protection, but many defenses are not adequate to deal with such assaults, especially complex layer 7 DoS attacks (L7DA).

Frost & Sullivan analyst Edison Yu agreed. He noted that this is particularly the case in the Asia-Pacific region, where many enterprises still rely on traditional firewalls and intrusion prevention systems (IPS), rather than an application firewall, for protection against L7DA.

Yu explained that these sophisticated DDoS attacks are able to bypass the traditional firewall and target applications, bringing down Web sites due to an overwhelming volume of service requests being sent out by botnets.

The “Brute Force” program is said to be able to send more than 1 million attempts per second, and L7DA can also slow the HTTP server to a crawl.

According to DOSarrest, the top misconception among enterprises is that traditional firewalls are able to thwart all DDoS attacks. The security vendor added that over the past 12 months, L7DA accounted for 60 percent of the overall DoS threat landscape, followed by SYN-type floods at 30 percent and UDP/ICMP attacks at 10 percent.

The company also revealed that 80 percent of DoS attacks had a layer 7 component, while the same percentage carried a combination of two or more components.

Teolis noted that “most purpose-built, so-called DDoS mitigation devices” will not stop all layer 7 attacks, but enterprises can thwart them by adopting a “robust multi-layer strategy”. This includes eliminating all non-essential traffic in the cloud, having good SYN protection and implementing a well-designed robust system for layer 7.

DOSarrest, which represents various merchants in different industries including pharmaceuticals, gaming and music downloads, revealed that one of its customers was a victim of “Operation Payback” during the WikiLeaks-related attacks but suffered zero downtime. Operation Payback, a coordinated series of attacks by Internet activists originally aimed at opponents of online piracy, launched attacks on the Web sites of banks that withdrew their services from WikiLeaks.

Internet not built for trust
Yu, who has been tracking the development of DDoS attacks, noted that what used to be reserved for driving “cyber espionage” is now being exploited by cybercriminals to gain sensitive data or compromise monetary transactions.

He described it as a “two-way situation”: increasingly, enterprises are migrating to the Web for commercial reasons, and by making more information available online to give employees and customers easy access, businesses are giving criminals greater opportunities to scrutinize system loopholes, thereby making their sites more vulnerable, he said.

“The Wikileaks incident has emphasized that the Web was never designed as a trusted environment,” Yu cautioned. “I think that’s something we tend to forget when we go online and embrace the Web in personal and professional domains.”

Jonas Frey of Probe Networks was quoted in a recent NetworkWorld article as saying that even as ways to mitigate and thwart attacks continue to emerge, attackers have also been successful in discovering new security loopholes. He added that there is “no real solution right now”.

“Nowadays the consumers have a lot more bandwidth and it’s easier than ever to set up your own botnet by infecting users with malware and alike,” Frey said in the report. “There’s not much you can do about the unwillingness of users to keep their software or operating system up-to-date. There is just no patch for human stupidity.”

While the figures paint a grim picture, Teolis believes the overall risk is still low. However, he noted that the landscape remains unpredictable.

Yu noted: “DDoS is becoming more and more contentious, given the nature and motivation behind the attacks, [and this is] something which enterprises are not very wary of.”

In a bid to minimize risk exposure, the analyst urged enterprises to re-examine access to the corporate network through mobile devices, and to evaluate whether their IT infrastructure is capable of handling these security threats.

As more criminals turn to layer 7 DDoS attacks, an increasing number of security vendors are launching service offerings that specifically target such risks. Kaspersky, for instance, recently announced plans to start selling its experimental “DDoS shield” globally if it proves to work effectively.

Sophos: Beware Facebook’s new facial-recognition feature

Facebook’s new facial recognition software might result in undesirable photos of users being circulated online, warned a security expert, who urged users to keep abreast of the social network’s privacy settings to prevent that scenario from becoming reality.

Graham Cluley, senior technology consultant at security vendor Sophos, said in a statement released Monday that the new facial recognition software introduced last week by Facebook has the capability to match people’s faces in photos uploaded by other members. While users will not be automatically identified, or “tagged” in Facebook parlance, members who upload these pictures will be prompted to tag a list of suggested friends identified by the facial recognition software, Cluley noted.

Furthermore, he added that once a Facebook user has identified people to be tagged in a photo, these individuals run the risk of being automatically suggested to other friends by the social networking site.

“Even people who are not on Facebook, or who choose not to identify themselves openly in uploaded pictures, may nevertheless end up [being] easy to find in online photos,” he explained.

In an earlier report, Facebook’s vice president of product, Chris Cox, told ZDNet Asia’s sister site CNET News that photo tagging is “really important” for control because every time a tag is created, it highlights a photo of the user which he was not aware had been uploaded online. “Once you know [this picture exists], you can remove the tag, or you can promote it to your friends, or you can write the person and say, ‘I’m not that psyched about this photo’,” Cox said.

He said the feature will be rolled out to about 5 percent of Facebook’s U.S. users this week and, “assuming that goes well”, the company will continue to launch it in other markets. He also stressed that there will be an opt-out option for the new feature, so if members do not want to show up in their friends’ tagging suggestions, they will not.

Cluley, however, spoke out against Facebook for maintaining an opt-out, rather than opt-in, stance toward user information. “While this feature may be appealing for those Facebook users that are keen to share every detail of their social life with their online friends, it is alarming to those who wish to have a little more anonymity,” he said.

He cited a recent Sophos poll that revealed 90 percent of Facebook users surveyed called for features on the social networking site to become opt-in. With the introduction of the facial recognition capability, he predicted that this percentage will rise.

To prevent privacy loss, Cluley recommended that users opt out when the feature is turned on. He added that keeping on top of new features and ensuring privacy settings are up-to-date is essential for Facebook users in order to make sure they do not share too much personal information online.

This is not the first time the social network has received flak for instituting an opt-out policy for its features. In March, Facebook users were up in arms after the site announced it would automatically share user data with a select group of third-party sites without specific permission.

Security expert suggests demilitarizing cybersecurity

perspective As if the wars on terror and drugs weren’t keeping U.S. officials busy enough, the drum beats of cyberwar are increasing.

There were the online espionage attacks Google said originated in China. Several mysterious activities with Internet traffic related to China. The Stuxnet worm that experts say possibly targeted Iranian nuclear centrifuges. An attack on the WikiLeaks site after it released classified documents damaging to U.S. foreign policy. And don’t forget the Internet attack on Estonia from a few years ago.

To deal with the geopolitical dramas that are projected in the online world, the U.S. is using military strategy and mindset to approach cybersecurity, creating a Cyber Command and putting oversight for national cybersecurity under the auspices of the Department of Defense.

But offense isn’t always the best defense, and it never is when it comes to Internet security, says Gary McGraw, author and chief technology officer at security consultancy Cigital. More secure software, not cyber warriors, is needed to protect networks and online data, he writes in a recent article, “Cyber Warmongering and Influence Peddling.”

ZDNet Asia’s sister site CNET talked with McGraw about how the militarization of cybersecurity draws attention from serious threats.

CNET: So, tell me what’s wrong with going to DEFCON 1 in cyberspace now?
McGraw: I wrote an article with Ivan Arce, the founder and chief technology officer of Core Security Technologies. He’s from Argentina. Every time I talk to him he asks ‘what is up with you Americans and cyberwar anyway? Why are you so obsessed with cyberwar?’ Because nobody else is talking about it in the rest of the world. I travel a lot internationally and he is right. So we started talking about why that was. One of our main points is that there is a confusing blend of cyberwar stuff, cyber-espionage stuff and cybercrime stuff, and the stories are used to justify whatever political or economic end people may have, instead of trying to disambiguate these three things and talk about what they actually are.

What’s the danger with that?
The danger is that if we lump everything under ‘cyberwar’, then our natural propensity in the United States is to allow the Defense Department to deal with it. The DoD set up a Cyber Command in May. Cyber Command has an overemphasis on offense, on creating cyber-sharpshooters and exploiting systems more quickly than the enemy can exploit them. I don’t think that’s smart at all. I liken it to the world living in glass houses and Cyber Command is about figuring out ways to throw rocks more accurately and quickly inside of the glass house. We would all be better suited trying to think about our dependence on these systems that are riddled with defects and trying to eliminate the defects, instead.

Is the rhetoric all driven by attracting money? That’s a very cynical way of thinking.
A lot of people think it is. The military industrial complex in the U.S. is certainly tied very closely to the commercial security industry. That is not surprising, nor is it that bad. The problem is the commercial security industry is only now getting around to understanding security engineering and software security. The emphasis over the past years has been on trying to block the bad people with a firewall and that has failed. The new paradigm is trying to build stuff that’s not broken in the first place. That’s the right way to go. If we want to work on cybercrime and espionage and war, to solve all three problems at once, the one answer is to build better systems.

You mention that cybercrime and cyber-espionage are more important than cyberwar. Why is that?
Because there is a lot of crime, less espionage, and very little cyberwar. (chuckles) And the root cause for capability in all these things is the same. That is dependence on systems that are riddled with security defects. We can address all three of those problems. The most important is cybercrime, which is costing us the most money right now. Here’s another way to think about it: everyone is talking about the WikiLeaks stuff, and the impact the latest (confidential files) release is having on foreign policy in the U.S.

The question is, would offensive capability for cyberwar help us solve the WikiLeaks problem? The answer is obvious. No. Would an offensive cyberwar capability have helped us solve the Aurora problem where Google’s intellectual property got sucked down by the Chinese? The answer is no.

What would have helped address those two problems? The answer is defense. That is building stuff properly. Software security. Thinking about things like why on earth would a private (officer) need access to classified diplomatic cables on the SIPRNET (Secret IP Router Network)? Why? If we thought about constructing that system properly and providing access only to those who need it, then things would be much better off.

The term “cyber” makes it seem more scary. We’re just talking about Internet, right? Might there be a problem with semantics?
There could be. There has been an overemphasis on cyberwar in the U.S. The problem with cybersecurity is that there is just as much myth and FUD and hyperbole as there are real stories. It’s difficult for policy makers and CEOs and the public to figure out what to believe because the hype has been so great, such as with the Estonia denial-of-service attack from 2007. So when we talk about Stuxnet, it gets dismissed.

So it’s the boy who cried wolf problem?
Yes.

Stuxnet is real. Is that cyberwar?
It seems like a cyberweapon. I think it qualifies as a cyberwar action. My own qualification is that a cyberattack needs to have kinetic impact. That means something physical goes wrong. Stuxnet malicious code did what it could to ruin physical systems in Iran that were controlling centrifuges or that were in fact centrifuges. If you look at the number of centrifuges operating in Iran you see some big drops that are hard to explain. (Iranian President Mahmoud) Ahmadinejad admitted there was a cyberattack on the centrifuges.

So why does the attack on Estonia not qualify?
The kinetic impact is important, but also an act of war is the act of a nation-state. The Estonia attacks fail the nation-state actor test. It also fails the real impact test. Sure, their network went down, but whoop dee do! Who cares? If you took that same sort of attack against Google or Amazon they wouldn’t even notice. I think people were using that attack–which was carried out by individual cybercriminals in Russia, not by the state–to hype up the cyber war thing. In fact, in my work in Washington [D.C.], the Estonia story keeps coming up, over and over again, as an example of cyberwar.

What is your qualification to discuss cyberwar matters and policy?
This year, I’ve been working more in Washington than I have in past. I’ve been to the White House, the Pentagon, talked to think tanks. I’m a little bit worried that the discourse is too much about cyberwar. We should try to untangle the war, espionage, and crime aspects and maybe emphasize building better systems and getting ourselves out of the glass house as opposed to trying make a whole new cadre of cyber-sharpshooters as [CIA Director] General Hayden suggests. For policymakers the conception of our field [of security] is muddled.

I’m worried we’re not spending on [Internet security] defense at all. There’s no way to divide and conquer networks. That is, we can’t defend the military network or the SIPRNET but not defend the Internet because we’re ignoring 90 percent of the risk. Most of the infrastructure in the U.S., 90 percent of it that’s important, is controlled by corporations and private concerns, not by the government. The notion that we can protect military networks and not the rest of it just doesn’t make any sense. That’s one problem.

The other problem is the Air Force has always been about domination in the air and taking away that capability from the enemy early and eradicating infrastructure. This notion of a ‘no-fly zone’ is kind of interesting. Unfortunately those tactics don’t work in cyberspace because there is a completely different physics there. There is no such thing as taking ground or controlling air space in cyberspace. Things move at superhuman speed in cyberspace. So some of these guys who are good military tacticians are having a hard time with cyberwar policy and cyberdefense because of the analogies they’re using.

You mentioned in your article that “in the end, somebody must pay for broken security and somebody must reward good security”. Are you suggesting that we hold software makers liable for flaws?
I don’t know what the answer is. We need to change the discourse to be around how we incentivize people to build better systems that are more secure and how we disincentivize the building of insecure systems that are riddled with risk. As long as we can have that conversation, then policy makers might be able to come up with the right sort of levers to cause things to move in the right direction. We’re not suggesting any particular approaches, like liability. We’re just trying to change the discourse from being about war to being about security engineering.

Anything else?
I think we are at risk and I do think cyberwar is a real problem we have to grapple with. But even though we are at risk, we need to have rational conversations about this. Too much FUD and hyperbole don’t do anything to help the situation. The poor guys that are charged with setting policy have a hard time doing that because we’re having the wrong conversation at the policy level right now.

This article was first published as a blog post on CNET News.

LinkedIn disables passwords in wake of Gawker attack

LinkedIn is disabling passwords of users whose e-mail addresses were included in the customer data that was exposed in an attack on the Gawker blog sites.

The professional-networking site is taking this action to prevent any of its customers from having their LinkedIn accounts hijacked in the event that they used the same password that they used on any of the Gawker sites.

“There is no indication that your LinkedIn account has been affected, but since it shares an e-mail with the compromised Gawker accounts, we decided to ensure its safety by asking you to reset its password,” the company said in an e-mail to users today.

To reset your LinkedIn password, go to the Web site and click on “Sign In” and “Forgot Password?” and follow the directions.

Gawker’s Web site and back-end database were compromised, and passwords, usernames, and e-mail addresses for about 1.3 million user accounts were posted on The Pirate Bay BitTorrent site over the weekend. The passwords were encrypted; however, weak passwords can easily be cracked by brute-force attacks. (To find out how to check whether you are at risk and get more details about the incident, read this FAQ.)

People who use the same password on multiple sites are at risk of having their accounts on those other sites compromised. This happened already on Twitter, with some accounts being used to send spam shortly after the Gawker breach was publicized.

Security experts urge people to choose strong passwords, to change them often and to not use the same password on multiple sites.
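In practice a password manager does this for you, but as a minimal sketch of the advice above, the snippet below generates a strong random password per site with Python's secrets module; the site names and length are placeholders.

```python
# Sketch: generate a strong, unique password for each site.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=16):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

for site in ("linkedin.example", "gawker.example"):
    print(site, generate_password())
```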

This article was first published as a blog post on CNET News.

New scam tactic: Fake disk defraggers

We’ve all heard about fake antivirus programs, also known as scareware. These programs falsely claim that your computer is infected with malware and prompt you to buy a product that will do nothing for you, except put your credit card number into the hands of criminals.

Well, now there are fake disk defraggers that masquerade as applications that fix disk errors on a computer. In a blog post, GFI Labs (formerly Sunbelt Software) dubbed the programs “FakeAV-Defrag rogues” and said they had names like HDDDiagnostic, HDDRepair, HDDRescue, and HDDPlus.

It would appear that the scammers are trying out the new programs to see which might best confuse potential victims and evade detection by legitimate antivirus software. The defragger clones emerged last month with names like UltraDefragger, ScanDisk and WinHDD, and pretended to find “HDD read/write errors”. Earlier this month came PCoptimizer, PCprotection Center, and Privacy Corrector, which were more generic security products rather than specifically antivirus, the post said.

Computer users should be suspicious of applications that are advertised via e-mail, pop up warnings about problems (especially immediately after you click on a Web page video), demand that you make a purchase before they will fix the problems, or prompt you to update your browser, GFI Labs said.

If you aren’t sure whether a program is legitimate, you can search for its name on a search engine or on GFI Labs’ site.

This article was first published as a blog post on CNET News.

Microsoft to boost Office security

Microsoft plugged 40 holes with 17 patches on Tuesday and said it will improve the security of Office 2003 and Office 2007 by adding a feature to the older versions of its productivity software that opens files in Protected View.

Customers should focus on the two critical bulletins that are part of Microsoft’s monthly Patch Tuesday security update, says Jerry Bryant, group manager for response communications in Microsoft’s Trustworthy Computing Group. The first is MS10-090, a cumulative update for Internet Explorer. It fixes seven vulnerabilities in the browser and affects IE 6, 7 and 8. There have been attacks targeting IE 6 on Windows XP, Bryant said.

The other critical bulletin is MS10-091, which fixes several vulnerabilities in the Windows OpenType Font driver. It affects all versions of Windows, but the risk is primarily to third-party browsers that natively render OpenType fonts, which IE does not, according to Bryant.

The other bulletins are not critical and “could potentially be put off until after Christmas”, he said in an interview with CNET. Windows (all supported versions), Office, IE, SharePoint, and Exchange are affected by the bulletins. Details are in the security advisory and in the Microsoft Security Response Center blog post.

Meanwhile, the company will be porting Office File Validation, which is currently in Office 2010, to Office 2003 and Office 2007 by the first quarter of next year, Bryant said. It will be an optional update.

The move will help protect customers from attacks that target about 80 percent of the Office vulnerabilities, Bryant said. Attackers typically create a document that uses an exploit and e-mail the maliciously crafted document to potential victims or host it on a Web site and prompt people to open it.

Office File Validation checks the file-format binary schema, such as .doc or .xls, and opens the file in a protected view if it detects a problem. “If the user wants to edit or continue to open the document then there are severe warnings about what that might mean” and that it could be dangerous, Bryant said.
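Microsoft has not detailed the validator's internals here, but the basic idea of checking a legacy Office file's binary structure can be sketched as follows. Legacy .doc and .xls files are OLE2 compound documents that begin with a fixed signature, and a file that fails even that check should not be opened with full trust. This is a simplified, hypothetical illustration, not Office File Validation itself.

```python
# Sketch: verify that a file claiming to be a legacy .doc/.xls actually starts
# with the OLE2 compound-file signature. A real validator also checks the
# internal stream layout against the published binary format specification.
OLE2_MAGIC = bytes.fromhex("d0cf11e0a1b11ae1")

def looks_like_legacy_office(path):
    with open(path, "rb") as f:
        return f.read(8) == OLE2_MAGIC

# Files that fail this kind of check would be opened read-only in a
# sandboxed "Protected View" rather than trusted outright.
```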

This article was first published as a blog post on CNET News.

McDonald’s warns customers about data breach

McDonald’s (U.S.) is warning customers who signed up for promotions or registered at any of its online sites that their e-mail addresses have been compromised by an unauthorized third party.

The customer name, postal address, phone number, birth date, gender, and information about promotional preferences may also have been exposed, the company said in an FAQ on its Web site. Social Security numbers were not included in the database, the company said.

The data was managed by an e-mail database management firm hired by Arc Worldwide, a “longtime business partner” of McDonald’s, according to a recorded message on the company’s toll-free number. The unnamed database management firm’s computer systems were improperly accessed by a third party, McDonald’s said.

McDonald’s did not disclose the number of records involved or when the breach happened. McDonald’s representatives did not immediately return a call seeking comment this morning.

“This incident has nothing to do with credit card use at the restaurants,” the FAQ says. “The database that was accessed by the unauthorized third party did not contain any credit card information or any other financial information. Further, the information in the database was not gathered from our restaurant registers, but from voluntary subscriptions to our websites or promotions.”

McDonald’s is informing customers by sending e-mails to people who subscribed on the sites and has notified law enforcement authorities. The company advised customers to be wary of anyone calling and purporting to be from McDonald’s, and to report it to the company if that happens.

This article was first published as a blog post on CNET News.

Malware for smartphones is a ‘serious risk’

Businesses and consumers are at risk of data breaches through smartphone use, according to the European Network and Information Security Agency.

Data leakage and disclosure, phishing and spyware are among the more common risks, the European Network and Information Security Agency (Enisa) said in a report.

The report focused on threats posed to end users, company employees and high-level company officials, people who use smartphone devices for managing disparate aspects of their lives.

Read more of “Enisa: Malware for smartphones is a ‘serious risk’” at ZDNet UK.

Akamai says it can withstand Anon attacks

Akamai managers say they could have bolstered the Web sites that buckled under attacks launched recently by Internet vigilantes.

The world’s largest content delivery network says it has enough servers and the right kind of network to “mitigate distributed denial-of-service (DDoS) attacks”, Neil Cohen, Akamai’s senior director of product marketing told ZDNet Asia’s sister site CNET. DDoS describes the practice of overwhelming a Web site with traffic so that it can’t be accessed.

Some well-known sites were the targets of DDoS attacks launched by a loosely connected group of WikiLeaks supporters who call themselves Anonymous or Anon for short. The group lashed out at companies they consider to be hostile to WikiLeaks, the service responsible for publicizing an enormous amount of classified U.S. government documents. Some of those attacked were MasterCard, Visa, PayPal, and Amazon.

MasterCard, Visa, and PayPal stopped processing donations made to WikiLeaks while Amazon stopped hosting WikiLeaks servers. At this point it appears that Amazon was able to withstand the attack while MasterCard and Visa’s sites were inaccessible for extended periods.

Cohen said few other companies have as much experience as his with defending Web sites from this kind of threat. He said that late last month, a number of U.S. retail sites came under DDoS attack from multiple countries. Cohen said he was unaware of who was behind it or why, but he said that Akamai helped some of the retailers withstand the onslaught of hits, which in some cases reached 10,000 times a site’s normal daily traffic. None of the sites went down, he said.

“What we did over the last decade was build out our network, and we now have 80,000 servers in 70 countries,” Cohen said. “We can mitigate DDoS attacks by having a server extremely close to the source rather than try to absorb the attack in one centralized location. As an attack grows in size and distributes out to more bots, we have a server near the compromised machines. As the attack gets bigger, our network scales on demand.”

While there are reports that Anonymous is giving up on DDoS attacks related to the WikiLeaks case, it is unlikely that we’ve seen the end of them. In retaliation against the entertainment industry’s antipiracy attempts, Anonymous knocked out the Web sites belonging to the Motion Picture Association of America, the Recording Industry Association of America, Hustler magazine, and the U.S. Copyright Office.

This article was first published as a blog post on CNET News.

App firewall helps counter DDoS threats

With cyberattacks getting more sophisticated, enterprises that rely on Web applications should look to application firewalls for better protection, particularly against distributed denial-of-service (DDoS) attacks, urged a security expert.

Vladimir Yordanov, director of technology at F5 Networks, explained that with 80 percent of attacks hitting Web apps these days, traditional defenses such as the conventional perimeter firewall offer very little protection. Such systems are the reason why DDoS-type attacks are successfully executed to compromise Web sites and payment systems, he added.

“Traditional systems, such as intrusion prevention or intrusion detection systems, cannot block effective requests as these are not easily detected. The attacks targeting coding or browser flaws are usually let through, and it is the application firewall’s job to weed out bad traffic,” Yordanov noted during a one-on-one interview with ZDNet Asia on Monday.

Typically, the application firewall responds by sending a cookie or challenge to ensure the user is real and is sending a valid request, before allowing access into the system, the security expert pointed out. In many of the DDoS attacks used recently against PayPal, MasterCard and Visa, requests were sent out by botnets, or zombie machines, and these computers are not able to respond to such challenges, he added.
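The cookie-challenge mechanism Yordanov describes can be sketched roughly as follows. This is a simplified, hypothetical illustration rather than how F5's products actually implement it; the function names, secret key and addresses are invented.

```python
# Sketch of a cookie challenge: the first request gets a signed cookie and a
# redirect; only clients that store the cookie and retry (real browsers, not
# most flooding bots) reach the application.
import hashlib
import hmac

SECRET = b"replace-with-a-random-key"

def make_token(client_ip):
    return hmac.new(SECRET, client_ip.encode(), hashlib.sha256).hexdigest()

def handle_request(client_ip, cookies):
    if hmac.compare_digest(cookies.get("challenge", ""), make_token(client_ip)):
        return "200: pass the request through to the application"
    # No valid cookie yet: answer with the challenge instead of the app.
    return "302: redirect, Set-Cookie: challenge=" + make_token(client_ip)

print(handle_request("203.0.113.7", {}))                                        # challenged
print(handle_request("203.0.113.7", {"challenge": make_token("203.0.113.7")}))  # allowed
```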

According to earlier reports, this series of attacks–codenamed “Operation Payback”–were initiated by supporters of jailed WikiLeaks founder Julian Assange, whose Web site has been shut down by Internet service providers, Web hosting companies and payment providers across the U.S. and Europe.

In protest at the treatment of WikiLeaks and Assange, supporters made use of 3,000 volunteered computers and up to 30,000 hacked machines to shut down the Web sites of PayPal, MasterCard and Visa, which had earlier deemed WikiLeaks a criminal organization and denied it their services.

No foolproof solution
Besides deploying application firewalls, other forms of protection that enterprises could look at include “clean pipes” from ISPs that filter out bad traffic and putting in place a high level of network security, Yordanov pointed out. Enterprises can also sanitize their protocols, ensuring that all information needed to establish a connection is present before allowing access, he added.

However, as security technology constantly evolves, hackers and cybercriminals have managed to find ways to compromise systems, and this is made worse by the increasing access to networks from mobile devices. Yordanov noted that the more dispersed a workforce is, the greater the risk of an attack, a situation that criminals are currently exploiting.

Conceding that no solution is 100 percent foolproof, the executive said the best way for a system to be kept safe from attacks is to have the system shut down.

“Rather than having the Web site be compromised, it’s better to have it shut down completely,” Yordanov said. “If the engineers are able to trace the IP addresses of where the requests are sent, they can also eliminate the sources by blocking the addresses, but only if they are static. But increasingly, these requests change frequently, so it is not that useful.”

The F5 director noted that while shutting down the system is helpful, the option is suited only for enterprises with enough manpower to constantly monitor Web traffic.

Cloudy security prospects
When quizzed on the level of security for cloud computing, the IT expert expressed pessimism at the current situation, but said things will improve given time.

He revealed that he had personally gone through SLAs (service level agreements) offered by six cloud providers, but none made commitments to protect customers’ data.

“One even asked for all of your data, but there is no procedure that tells you how to get it back, and how they actually protect the data,” Yordanov noted. “[Protection agreements] are all worded loosely now.”

He went on to say that the industry is still at an early stage, rather like e-commerce when it first started. The executive expects to see a similar “revolution” within cloud computing to spur adoption, though.

In the meantime, many large enterprises are eyeing the private, rather than public, cloud, he said. That is because cloud providers are not sure whether they can fully guarantee the safety of their clients’ data, so private cloud deployments are a way of shielding themselves from potential legal action, Yordanov added.


Filet-O-Phish: details stolen in McDonald’s hack

McDonald’s has lost thousands of customer details to a hacker, including names, phone numbers and street and e-mail addresses. The fast food chain is also warning of pending phishing scams.

The customer details were lost after a hacker broke into the fast-food restaurant’s U.S. marketing partner and stole the details provided by customers who sign up for promotions.

McDonald’s was concerned that the hacker might use the details to conduct phishing scams. Phishing scams are fraudulent e-mail campaigns run by criminals to steal financial and identity information, or to infect users’ computers with malware.

“In the event that you are contacted by someone claiming to be from McDonald’s asking for personal or financial information, do not respond and instead immediately contact us… McDonald’s would not ask for that type of information online or through e-mail,” the company wrote on its website.

“Law enforcement officials have been notified and are investigating this incident.”

The company apologized for the breach.

McDonald’s spokesperson Bronwyn Stubbs said Australian customers were not affected.

An e-mail provider hired by promotion company Arc Worldwide was responsible for the loss, which did not include credit card data or social security numbers.

This story was first posted in ZDNet Australia.

Gawker wrestles with reader data breach, hacking

Gawker.com has apparently been the victim of a pair of security compromises last weekend, one of which put readers’ data at risk.

The tech gossip site informed readers last week in a blog post that its database of reader commenting accounts had been compromised and urged its users to change their passwords:

Our user databases appear to have been compromised. The passwords were encrypted. But simple ones may be vulnerable to a brute-force attack. You should change your Gawker password and on any other sites on which you’ve used the same passwords.

We’re deeply embarrassed by this breach. We should not be in the position of relying on the goodwill of the hackers who identified the weakness in our systems. And, yes, the irony is not lost on us.

Later in the day, it was revealed that the site itself was compromised as well when a post appeared on the site reportedly linking to the site’s source code at The Pirate Bay. The story appeared under the byline of Gawker writer Adrian Chen, but Chen tweeted that he had not written the story and the site had been hacked.

Gawker representatives did not immediately respond to a request for additional information.

This article was first published as a blog post on CNET News.

Symantec: DDoS attacks hard to defend

It has surfaced that the distributed denial-of-service (DDoS) attacks on the Visa and MasterCard Web sites on Wednesday were carried out with a toolkit known as the Low Orbit Ion Cannon (LOIC).

In an e-mail interview with ZDNet Asia, Ronnie Ng, senior manager for systems engineering at Symantec Singapore, explained that LOIC is a network stress-testing application that attempts a DoS attack on the target site by flooding the server with TCP, UDP and HTTP requests. The intention is to disrupt the service of a particular host.

It is widely understood that there are free attack toolkits readily available on the Web, and LOIC is one of them.

“There are many applications out there that are capable of carrying out such attacks, some of which are legitimate, depending on the user’s intention, and can be found with a simple search,” Ng added.

“However, there are many underground tools also designed for malicious use that can be utilised efficiently with methods such as botnets. Even a simple tool that sends out small packets can have a great impact if used collectively,” he said.

While the DDoS form of attack is not new, the security expert offered the consolation that cybercriminals are not always one step ahead of the protection Web merchants have today.

Ng said: “Attackers are constantly looking for ways to get the information they are after. This varies from using DoS to exploiting vulnerabilities–low or high severity ones–to compromise a system.”

He added that as protection technologies continue to evolve to provide maximum protection, proper patch management and user awareness of today’s cyber threats are necessary to ensure a higher security stand.

While it is possible to maintain high-level security for payment merchants, Ng admitted that difficulties remain in defending against DDoS attacks, which are typically highly distributed.

“Online merchants will need to audit gateways and firewall rules to ensure they are capable of dealing with small-scale everyday attacks, and have comprehensive policies in place to defend themselves against large-scale attacks,” he said.

Some of these policies can include more aggressive packet filtering, setting adjustments to determine how and when packets may be dropped, implementation of rules for IP addresses, and IP address block blacklisting when certain thresholds are reached, the expert recommended.
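The threshold-based blocking described above can be illustrated with a simple sliding-window counter. This is a hypothetical sketch; real deployments enforce such rules in firewalls or upstream network equipment, and the window size and limit below are arbitrary.

```python
# Sketch: count requests per source IP over a sliding window and blacklist
# addresses that exceed a threshold.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
THRESHOLD = 100               # requests allowed per window per IP

recent = defaultdict(deque)   # ip -> timestamps of recent requests
blacklist = set()

def allow(ip, now=None):
    if ip in blacklist:
        return False
    now = time.time() if now is None else now
    q = recent[ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()           # drop requests that fell out of the window
    if len(q) > THRESHOLD:
        blacklist.add(ip)
        return False
    return True
```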

Visa’s and MasterCard’s sites were hit on Wednesday by a network of 15,000 online activists, who dubbed the attack “Operation Payback”. This was carried out in retaliation for the credit card companies’ and PayPal’s announcement that they would no longer process donations to WikiLeaks. The hackers also tried to hit Amazon.com, but failed.

The group of hackers, called Anonymous, has vowed to target British government Web sites if WikiLeaks founder, Australian Julian Assange, is extradited to Sweden, where he is wanted over allegations of sexual assault. Assange is now on remand in the U.K. over those allegations.

In a separate development, several former WikiLeaks members have said they are planning to launch a new site, known as OpenLeaks, to continue supporting whistle-blowing activities.

In the Netherlands, Dutch police confirmed the arrest of a 16-year-old teenager who has admitted to participating in the attacks.

Microsoft to plug critical IE, final Stuxnet Windows holes

Microsoft said today that next week’s Patch Tuesday will bring 17 updates plugging 40 holes and featuring two rated “critical”, including one in Internet Explorer (IE) that was targeted in attacks last month.

The exploit for the critical IE vulnerability was written for IE 6 and 7, but IE 8 is also vulnerable, Microsoft said when it issued a warning about the flaw in November.

Also fixed on Tuesday will be the last of the four holes in Windows that the Stuxnet malware used.

“This is a local Elevation of Privilege vulnerability and we’ve seen no evidence of its use in active exploits aside from the Stuxnet malware,” Mike Reavey, director of the Microsoft Security Response Center, said in a blog post.

Windows (all supported versions), Office, IE, SharePoint, and Exchange are affected by the bulletins, today’s advisory says.

This brings Microsoft’s total bulletin count for the year to a record 106, Reavey said. He attributed that to vulnerability reports in Microsoft products increasing slightly and older products “meeting newer attack methods, coupled with overall growth in the vulnerability marketplace”.

“Meanwhile, the percentage of vulnerabilities reported to us cooperatively continues to remain high at around 80 percent; in other words, for most vulnerabilities we’re able to release a comprehensive security update before the issue is broadly known,” Reavey wrote.

This article was first published as a blog post on CNET News.

Debit cards a magnet for fraud

Debit card fraud has increased dramatically in the year to June 2010 thanks to an explosion of ATM (automated teller machine) skimming.

The cost of skimming fraud has rocketed by 94 percent to more than AUD$22 million (US$21.56 million) since 2009 and accounts for 79 percent of debit card fraud.

Debit cards are vulnerable to ATM skimming, where fraudsters replace the terminals with devices capable of reading PINs and stealing account information from the magnetic strips.

Figures from the Australian Payments Clearing House show incidents of fraud on magnetic stripe debit cards used for EFTPOS PIN transactions have jumped to about 3 in every 1000 transactions, or some 84,000 in the year ending June 2010.

The cost of that fraud over the same period has risen to close to AUD$28 million (US$27.44 million), from 7.4 cents to 10.7 cents in every AUD$1000 (US$980.1) transacted.

An industry source told the Australian Financial Review that the spike in ATM skimming was caused by a string of scams targeting McDonald’s restaurants in which criminals replaced handheld EFTPOS devices with replicas capable of transmitting account details via Bluetooth.

But the same figures show the cost of fraud affecting credit cards with embedded chips has dropped from 60.1 cents to 58.6 cents in every AUD$1000 (US$980.1) transacted, and the likes of Visa and MasterCard are chuffed.

“This is great news for cardholders and merchants alike and shows that the industry investment in chip is paying off,” Visa’s local general manager Chris Clark said.

The clearing house is more sobering; it points out that while the cost of credit card fraud has dropped, the amount of fraud has increased.

It attributes the rise to moves by banks to lower the threshold value of fraud investigated, meaning banks will detect more but cheaper fraud.

The drop in the value of fraud detected coincides with a push by MasterCard and Visa to drive the use of contactless credit cards such as payWave and PayPass, which bypass identity confirmation measures for transactions less than AUD$100 (US$98.01).

The system uses a fast wireless system to process the transactions and does not transmit account information, according to the system’s developers.

While fraudsters have moved away from scamming credit cards, they are having a field day with vulnerable online shoppers.

Fraud targeting Internet, mail or phone shoppers, where the physical card is not required, has surged by 25 percent to AUD$102.6 million (US$100.56 million).

It accounts for more than half of all frauds on credit, debit and charge cards, according to the clearing house.

The clearing house said better IT security, in line with adherence to the Payment Card Industry (PCI) Data Security Standard (DSS), is critical to reducing online or “card-not-present” fraud.

The house’s chief executive officer Chris Hamilton said that Australia had a lower incidence of fraud than other nations: “Australia [is] less attractive for fraudsters from other countries.”

This article was first published at ZDNet Australia.

Facebook, Twitter boot WikiLeaks supporters after Visa attack

A hacker group that calls itself “Anonymous” says it took the Visa Web site down on Wednesday in retaliation for the credit card company suspending payments to the WikiLeaks site.

Earlier Wednesday the group hit the MasterCard site with a distributed denial-of-service attack for the same reason, and it took down PayPal over the weekend. The MasterCard site was back up this afternoon.

“IT’S DOWN! KEEP FIRING!!!” the group tweeted on its Operation Payback campaign page.

On Tuesday, Visa said it was suspending payments to the controversial whistle-blower site, joining MasterCard and PayPal.

Operation Payback also said its page had been banned from Facebook for violating terms of use, and late Wednesday afternoon the group’s Twitter account was suspended as well. Attempts to reach the group’s Twitter page displayed a warning that said “Sorry, the profile you are trying to view has been suspended.” A Twitter representative declined to comment on the matter.

Facebook bans pages that are “hateful” or “threatening” or which attack an individual or group, according to a warning Operation Payback posted to Twitter. A Facebook spokesperson provided this statement: “Specifically, we’re sensitive to content that includes pornography, bullying, hate speech, and threats of violence. We also prohibit the use of Facebook for unlawful activity. The goal of these policies is to strike a very delicate balance between giving people the freedom to express their opinions and viewpoints–even those that may be controversial to some–and maintaining a safe and trusted environment.”

Meanwhile, Icelandic hosting company DataCell EHF said it will take legal action against Visa and MasterCard over their refusal to process donations for WikiLeaks. DataCell said that it had been losing revenue as a result of those actions.

WikiLeaks has come under attack since it posted its latest release of about 250,000 confidential U.S. diplomatic cables to the Web last month, embarrassing officials and incurring the wrath of foreign leaders. That release followed posting of cables related to the U.S. operations in Afghanistan and Iraq earlier in the year.

As U.S. politicians cry foul and WikiLeaks’ payment and infrastructure providers cut their ties to the beleaguered site, supporters have stepped up efforts to keep the site up, creating mirrors of the site and exacting revenge on those companies that turn their backs on the project.

While that war is being waged, Julian Assange, the public face of WikiLeaks, is behind bars for accusations not believed to be directly related to WikiLeaks. He was arrested on Tuesday in London on allegations of sexual assault in Sweden. Assange says he and the Web site are being unfairly punished for telling people what their governments are doing.

Asked for comment, Visa said in a statement Wednesday that its processing network that handles transactions was functioning normally but that its Web site was down. “Visa’s corporate Web site–Visa.com–is currently experiencing heavier than normal traffic. The company is taking steps to restore the site to full operations within the next few hours.”

Update: This story was updated to note that Anonymous’ Operation Payback account on Twitter has been suspended, and again at 3 p.m. to include comments from Visa and Facebook.

This article was first published as a blog post on CNET News.

PC quarantines raise tough complexities

The concept of quarantining PCs to prevent widespread infection is “interesting, but difficult to implement, with far too many problems”, said security experts.

Microsoft’s security chief Scott Charney had suggested that ISPs could be allowed to quarantine infected PCs in “infection wards” to ensure a machine is cleared of malware before its connection is allowed to resume.

In an e-mail interview with ZDNet Asia, Michael Sentonas, McAfee’s CTO for Asia-Pacific, questioned the effectiveness of cutting Internet connection off a computer, when updates on security software and operating system patches can be done only online.

“There is also the issue around educating consumers or non-security professionals on what to do if they are infected and quarantined. Many non-security trained Internet users understandably leverage the Web to resolve issues. How are they going to achieve this without Internet [access]?” asked Sentonas.

Other uncertainties around resolution may also be difficult to settle, such as who releases the computer from quarantine once the machine is remediated, and who determines that the machine is safe, he asked.

Sentonas also likened the approach to the concept of not allowing an unsafe car on the roads so that others are protected, which ESET’s senior research fellow David Harley said “works up to a point”. However, he added that success would depend on individual implementations.

While enterprises have used [the concept] for years to protect their own networks, home users who are also the system administrators are often “ill-equipped” for such a role, Harley commented. But he admitted that such an approach could have a significant mitigating impact, subject to the diagnostic accuracy of the ISP, which very often could be a hit-and-miss situation.

Should the quarantine action be adopted, the questions of where it should be done and what the standards and procedures should be can be tricky, as conditions differ from country to country and are dependent on the contract between the consumer and the ISP, both experts said.

As Sentonas pointed out, the situation in an enterprise is less complicated than that of a home user, as “configuration of individual systems may be standardized and regulated centrally”. Dealing with home PCs, however, raises numerous possibilities and complexities because of the variety of systems and applications involved.

Legally, Harley was concerned about loss of earnings due to quarantining a PC. “If the PC is infected, VoIP may be impacted. [The question then is whether] the total loss of VoIP access would put the user in a precarious position. Consider the situation where the user does use some software, paid or even free. What appeal process does he have?”

On the other hand, this “walled garden” approach may be a revenue stream for security providers supplying contracted services to other service providers, said Harley. That said, if it is used as a marketing tool for the security provider, this might create legal problems.

“Indeed, we’re already seeing instances where fake support services circumvent legislation that regulates cold calling by ‘solving’ security problems on the victim’s PC, but for a fee,” explained the ESET research fellow.

“The walled garden approach can be said to be ‘grooming’ end users for this sort of abuse,” he added, noting that banks could in the future require the use of approved security measures before allowing a customer to connect to its servers.

Latest Communications News

Posted: December 9, 2010 in Communications

Next Windows Phone 7 update gets small delay

Citing hiccups following the rollout of last month’s Windows Phone 7 software update, Microsoft is pushing back the release date of the update that will bring Windows Phone users new features.

“I believe it’s important that we learn all we can from the February update,” wrote Eric Hautala, Microsoft’s general manager of Customer Experience Engineering, in a post on the Windows Phone blog. “So I’ve decided to take some extra time to ensure the update process meets our standards, your standards, and the standards of our partners. As a result, our plan is to start delivering the copy-and-paste update in the latter half of March.”

The news is likely to be unwelcome to those who were looking forward to finally getting their hands on the copy-and-paste feature Microsoft first unveiled all the way back in October, as well as some of the speed improvements the company detailed at the Consumer Electronics Show in January. That update had originally been slated to reach users in the first two weeks of March, a window that closes just four days from now.

Despite the delay, Hautala said the launch time frame of the much larger update, due sometime in the next three months, has not changed.

“This short pause should in no way impact the timing of future updates, including the one announced recently at Mobile World Congress featuring multitasking, a Twitter feature, and a new HTML 5-friendly version of Internet Explorer Mobile,” Hautala said.

The now infamous February update Hautala had been referring to was meant to prepare phones for this first update that will bring copy and paste, among other additions. It ended up leaving some users with Samsung devices unable to update their system software, with the process hanging just beyond the halfway point. In some cases this left users with an unusable device. Microsoft then pulled the update to make fixes, before re-releasing it. Even then, however, a handful of users still ran into problems.

All told, Microsoft had said that about 10 percent of customers were running into problems with the update. That includes other problems such as not being able to download the software due to Internet connectivity issues, as well as not having enough onboard storage, the company had said.

“Let me be crystal clear: We’re not satisfied when problems prevent you from enjoying the latest Windows Phone updates,” Hautala wrote. “When we find an issue, we study and fix it. To that end, we’re carefully studying the current update process and will apply the lessons learned from it to all future ones. This is how we get better.”

Are you paying too much to surf overseas?

Are you a frequent traveler and feel you’re paying way too much to access the Web while overseas?

ZDNet Asia, along with ZDNet Australia and ZDNet UK, is running an online survey concurrently in our respective regions to find out how our readers use mobile broadband abroad on their smartphones, tablets, laptops and other mobile devices.

Data roaming, as it is commonly described, is taking off as adoption of mobile devices and Web access via mobile platforms continue to see significant growth across the globe. In fact, an Ovum study predicts that, by 2015, 1 billion users worldwide will use only their mobile devices to access the Internet, and that the Asia-Pacific region will account for 518.4 million of them.

So, do take 10 minutes to complete the survey and tell us if you think your operator is doing enough to deliver affordable data roaming usage and charges to subscribers who want to remain connected during their travels.

We will discuss the results in a special report once the poll ends. Start the survey now.

Android to dethrone Symbian in APAC

Nokia’s strategy to go with Windows Phone 7 for its smartphone operating system (OS) will likely cost the company its “undisputed” position as the market leader in the Asia-Pacific region, excluding Japan, as early as 2011, according to a new report.

In a statement released Thursday, IDC predicted that devices running Google’s Android OS could overtake those powered by Symbian “as soon as this year”, given that Nokia’s Windows Phone 7 devices are not expected to be available in the market until the end of the year.

The Finnish phonemaker announced in February that it is partnering with Microsoft to bring the Windows Phone 7 OS to its smartphone range. However, support for Symbian will still continue, the company has said.

IDC reported that from this year onwards, “a lot more” brands will come out with Android-based devices at a lower price point. This will not only buoy the demand for smartphones in emerging markets but will also encourage feature phone users in all markets to consider upgrading to smartphones, the research firm added.

Smartphone shipments in the region are expected to hit 137 million units this year, IDC said, noting that this is the first time shipments will surpass the 100 million mark.

Total mobile phone shipments, which include feature phones and smartphones, will grow at a five-year compound annual growth rate (CAGR) of 34 percent in the region. Shipments will nearly double in five years’ time to reach 942 million units, up from 551 million units in 2010, said IDC.

According to IDC, smartphones will grow eight times as fast as feature phones to reach 359 million units by 2015. By that time, three in five mobile phones shipped will be smartphones, in contrast to one in five in 2010.

Melissa Chau, research manager for client devices at IDC Asia-Pacific’s domain research group, said in a statement: “Smartphones were a hot item in 2010, with more than double the shipments of 2009. In 2011, IDC expects this fire to keep burning.”

The Singapore-based analyst attributed the growth of smartphones to mobile phone vendors racing to get consumers on higher-margin devices and mobile platform stakeholders’ battle to woo app developers. She added that operators are also pushing smartphones to drive mobile data revenue.

A separate report from Canalys last month revealed that global shipments of Android phones had overtaken Symbian-based devices during the fourth quarter of 2010.

Canalys earlier this year also predicted that, globally, the Android platform will grow twice as fast as its rivals this year.

Tata Comms launches cloud platform in S’pore

SINGAPORE–Tata Communications has launched its infrastructure-as-a-service (IaaS) cloud offering in Singapore, and is targeting US$250 million in revenue from cloud services over the next three years.

Singapore is the second country after India, the telecoms player’s home market, to offer InstaCompute, David Wirt, Tata Communications’ global head of managed services and senior vice president, said at a briefing here Tuesday. Run out of its local data center, Tata Communications Exchange (TCX), the cloud service will also cater to neighboring markets such as Malaysia, Hong Kong, Thailand, Indonesia, Vietnam and the Philippines.

According to Wirt, Tata Communications has identified a market opportunity in cloud offerings and expects such services to bring in US$250 million in revenue over the next three years.

“We’re betting Tata Communications on the cloud,” he said. “We really believe that telecommunications service providers have an advantage in this market.”

Carriers, noted Wirt, have the advantage over non-carrier cloud providers as network latency is not an issue. He added that even traditional cloud providers are buying wholesale connectivity from Tata Communications as they understand that the network is the enabler for cloud.

Wirt said a competitive differentiator of InstaCompute is its Web management portal which allows companies to easily govern their cloud initiatives. Administrators are able to establish different projects and set a threshold for each user based on the budget allocated to the project, he said. The system can automatically send out alerts that a user is reaching an assigned threshold or even turn off the account to prevent overspending.
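
As a rough illustration of the kind of per-user budget control Wirt described, the Java sketch below (class and method names are hypothetical; this is not Tata’s portal code) warns when a user approaches an assigned spending threshold and suspends the account once the limit is reached.

// Conceptual sketch only: per-project budget thresholds of the kind described above.
import java.util.HashMap;
import java.util.Map;

public class BudgetGuard {
    private final Map<String, Double> limits = new HashMap<>(); // user -> budget cap
    private final Map<String, Double> usage = new HashMap<>();  // user -> spend so far

    public void setLimit(String user, double cap) {
        limits.put(user, cap);
    }

    // Record new spend and react: warn at 80 percent of the cap, suspend at 100 percent.
    public void recordSpend(String user, double amount) {
        double spent = usage.merge(user, amount, Double::sum);
        double cap = limits.getOrDefault(user, Double.MAX_VALUE);
        if (spent >= cap) {
            System.out.println("Suspending account for " + user);
        } else if (spent >= 0.8 * cap) {
            System.out.println("Alert: " + user + " has used "
                    + Math.round(100 * spent / cap) + "% of the project budget");
        }
    }
}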

The executive did not name Amazon Web Services as a competitor in the region, but admitted Tata Communications uses AWS as a benchmark.

According to Wirt, since InstaCompute launched in India, 55 to 60 percent of its customers have come from India, while the majority of the clientele outside India hail from the United States and Singapore.

Vinod Kumar, managing director and CEO of Tata Communications, noted that InstaCompute is targeted at companies of all sizes. Kumar said small and midsize businesses will likely run all applications on the platform while large enterprises will use it for non-mission critical apps or as a sandbox for testing applications.

During the briefing, Aroon Tan, managing director of Magma Studios, shared his experience hosting the company’s latest massively multiplayer online role-playing game (MMORPG) on the InstaCompute platform. He said the move to cloud computing eliminated the need to guess the rate of business growth in order to purchase physical servers, as virtual machines can now be turned on when needed.

Microsoft’s contract with Nokia rumored at $1B

It’s been less than a month since Microsoft and Nokia announced their strategic partnership that will see the two companies working together in a number of areas, though mainly mobile phones. One detail that was not disclosed at the time was what kind of dollar investment Microsoft had promised Nokia for developing and marketing Nokia-made handsets that will ship with Microsoft’s Windows Phone OS.

That detail has been made a bit clearer with a report by Bloomberg earlier Monday saying that Microsoft plans to pay Nokia more than US$1 billion, while Nokia, in turn, pays Microsoft a licensing fee for each copy of Windows Phone 7 as well as for the right to use some of Microsoft’s expansive patent portfolio.

In addition, Microsoft is said to be paying some of its investment long before the first Nokia phones running Windows Phone 7 go into the sales channel.

The deal, Bloomberg’s Dina Bass says, will run for more than five years and has not yet been signed.

A Microsoft representative declined to comment on the matter. Nokia did not immediately respond to a request for comment.

Qt no more
In addition to the reported financial details of the Nokia and Microsoft deal, Nokia announced earlier Monday that it would be selling off its Qt application development framework business. Qt had let application developers create apps that run on both Symbian and MeeGo, two mobile operating systems that Nokia is pushing aside to put the focus on Microsoft’s Windows Phone OS.

Nokia picked up Qt in its US$150 million acquisition of Trolltech in 2008. Buying it from Nokia is Finland-based Digia, which says it will set up subsidiaries in the U.S. and Norway to run Qt-related commercial licensing and operations businesses for the nearly 3,500 companies that currently hold Qt commercial licenses. The sale, for an undisclosed sum, is set to close later this month.

The move is not the death of Qt, and Nokia will continue to be involved with serving Qt commercial licensees, wrote Sebastian Nyström, who is the vice president of Qt and Webkit along with being the head of MeeGo for Nokia.

“Although Digia will now be responsible for issuing all Qt Commercial software licenses and for providing dedicated services and support to licensees, Nokia’s Qt technical support team will support and work closely with Digia for the next year,” Nyström said. “We will now begin work with Digia to ensure a smooth transition of all licenses and commercial relationships.”

The new ownership will also bring some extra features to the platform, Nyström said.

“Digia will invest significant resources in the ongoing development of Qt as a commercial framework. In particular, their plans include emphasizing Qt in the desktop and embedded environments and exploring new support models and feature requests,” Nyström explained. “Commercial customers can also expect improvements in support and functionality for older platforms that were not on the Nokia development road map. If you are a holder of a Qt commercial license you can expect to hear more about this soon.”

Operators in emerging markets feel network pressure

Operators from emerging markets are boosting their mobile networks to handle growing traffic from smartphones and mobile broadband devices, but an industry observer says they should revisit their current business strategies to stay relevant.

Arun Bansal, Ericsson head of Southeast Asia and Oceania, told ZDNet Asia that the region has seen an influx of smartphones and mobile data growth, driving operators to expand their mobile networks in terms of coverage and capacity.

In a separate interview, David Chambers, Amdocs’ product marketing manager, concurred that the growth of mobile broadband is putting a strain on operators’ networks.

However, despite the rush to boost their 3G infrastructure, Chambers said it is not technically possible for operators to build out capacity fast enough to meet forecasted demand.

He noted that these service providers are instead looking at Wi-Fi or femtocells to help offload data traffic, pointing to China Mobile’s plans to deploy 1 million WiFi hotspots as an example.

Customer experience a differentiator
According to Chambers, customer experience will play a big role in boosting an operator’s competitive edge. He explained that operators previously focused on selling the latest smartphones in the market because consumers’ selection of a mobile operator was “90 percent based on the device and 10 percent on networks”.

This scenario will change, said Chambers, as users will increasingly choose their operator based on the quality of its networks. “Unless you are with the right network, [having the phone is] less useful,” he added.

He also pushed for operators to offer tiered data plans instead of unlimited plans, since unlimited plans force them to provision networks for loads that cannot be determined. Contrary to the consumer belief that unlimited data plans are better, he said customers will appreciate charges that are “more directly related to what they think they should pay for” instead of paying a higher premium for unlimited data.

Instead of offering a general billing system, operators should also provide ways for customers to check in real time how much data they are using, he said, noting that operators should cap customers’ data traffic when they reach the data limit instead of abruptly cutting them off.

Zeus fraud gang trial in the UK hits another delay

Plea hearings for 11 people arrested for their part in an alleged multimillion-pound Zeus fraud ring in the U.K. have been delayed because the prosecution is still trying to assemble evidence against them.

The complex case is thought to involve a gang operating across a host of countries from Russia to the United States. It has left U.K. prosecutors sifting through a mass of computer logs and financial records that will not now be served as evidence in their entirety until Apr. 1, and has led to several postponements of plea hearings.

Eleven eastern Europeans attended Croydon Crown Court on Friday to enter pleas against charges of conspiracy to defraud and money laundering. They are alleged to have committed the crimes using the Zeus Trojan.

Read more of “Zeus fraud gang trial hits another delay” at ZDNet UK.

Apple gives developers iOS 4.3 Gold Master

Apple has given developers the Gold Master copy of iOS 4.3, which is slated to go out to users as a free download at the end of next week. The Gold Master is typically the same build users get when the software is released.

The software update was formally unveiled during Wednesday’s iPad 2 event. Developers had first gotten their hands on it in mid-January.

Among the new features that come with iOS 4.3 are support for Home Sharing (which lets you play your iTunes library from anywhere in the house), the capability to turn your iPhone into a Wi-Fi hot spot, improved AirPlay support, and a new JavaScript engine for Safari that Apple says brings Safari mobile up to speed with its Mac OS X counterpart.

Other iPad-specific improvements include a software toggle to turn the switch on the right side of the device into either a mute button, or the screen orientation lock switch–functionality Apple had changed with a previous software update.

Apple said that only the iPad, iPhone 4, iPhone 3GS, and third- and fourth-generation iPod Touch devices will be eligible for the software update.

China to track cell phones for traffic reasons–really

A Chinese government committee announced plans this week to try to ease vehicle traffic congestion by monitoring the whereabouts and movement of millions of mobile phones.

“Aha!” you might say, cynically thinking it’s a ruse by the government to conduct surveillance on its citizens. But that kind of surveillance is already being done there (as it is in the U.S.).

If you had been in the gnarly 62-mile traffic jam that took nine days to clear up near Beijing last August you wouldn’t be so suspicious of the news. Beijing, an urban hub in northern China, has a population of more than 22 million.

“In Beijing, where [I’m from], the traffic is a nightmare,” Andrew Lih, an associate professor at the University of Southern California’s Annenberg School of Communication and Journalism, told ZDNet Asia’s sister site CNET today. “They are going from the 1930s to the 1980s in one-fifth the time…It’s a genuine announcement and there’s a real need for it, but it seems creepy in American eyes.”

The announcement from the Beijing Science and Technology Commission talks about publishing real-time information based on cellular base station technology that can determine how far and in what direction the phones are traveling. The system can target specific congested areas and include public transit systems. Eventually, commuters will be able to get specific information about their routes that can be used to make more efficient travel plans.

It’s not clear from the announcement exactly how the system will work, but it likely involves triangulating an approximate location of a phone based on signals between the device and cell towers in the area. This may or may not involve the GPS (Global Positioning System) in the phone itself.
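
For illustration only, here is a minimal Java sketch of the trilateration arithmetic such a system might use, assuming the coordinates of three base stations and rough distance estimates to the handset are known; the class name and the figures in the example are made up.

// Illustrative trilateration sketch: estimate a handset's 2D position from
// three towers with known coordinates and estimated distances (e.g. from
// signal timing). Subtracting the circle equations gives a 2x2 linear system.
public class Trilateration {

    // Returns {x, y}, the estimated handset position.
    static double[] locate(double x1, double y1, double d1,
                           double x2, double y2, double d2,
                           double x3, double y3, double d3) {
        double a11 = 2 * (x2 - x1), a12 = 2 * (y2 - y1);
        double a21 = 2 * (x3 - x1), a22 = 2 * (y3 - y1);
        double b1 = d1 * d1 - d2 * d2 - x1 * x1 + x2 * x2 - y1 * y1 + y2 * y2;
        double b2 = d1 * d1 - d3 * d3 - x1 * x1 + x3 * x3 - y1 * y1 + y3 * y3;
        double det = a11 * a22 - a12 * a21;
        return new double[] { (b1 * a22 - b2 * a12) / det,
                              (a11 * b2 - a21 * b1) / det };
    }

    public static void main(String[] args) {
        // Hypothetical towers at (0,0), (1000,0) and (0,1000) metres.
        double[] p = locate(0, 0, 500, 1000, 0, 700, 0, 1000, 900);
        System.out.printf("Estimated position: (%.1f, %.1f)%n", p[0], p[1]);
    }
}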

“GPS is useful, but isn’t necessary at this stage; if the cell tower wants it, it can get it,” said Don A. Bailey, a senior security consultant at iSec Partners.

“Overall, what they’re doing (in China) is not at all strange. They can get as much location information as they want now, so they wouldn’t have to create some new program to get it. They’d just get it,” he said.

Sure, there is the potential for misuse, but, again, that’s nothing new. Telecom providers can see the phone number associated with a phone and get access to the billing information, all of which must be turned over to the government if agents come knocking on the door, according to Bailey.

“Not everything China does is underhanded and shady,” he said.

StarHub launches data roaming management tool

SINGAPORE–Local telco StarHub intends to make it easier for customers to monitor their data roaming usage with Roam Manager, an unstructured supplementary service data (USSD) command which they can key into their phone to receive related information.

StarHub subscribers can access the free service from today by typing *100# on their handsets. “We are seeing an increasing number of data bill shock complaints,” Joanna Chan, vice president of personal solutions at StarHub, said without revealing specific figures at the product launch here Wednesday.

Chan noted that more people are traveling overseas and there is a general lack of awareness when it comes to managing data roaming costs.

Web browsing, e-mail access, mobile applications, social networks and video streaming are the most popular functions that require a data connection, she said.

Aside from checking daily data roaming costs, Roam Manager also provides users with information such as contact numbers to emergency hotlines and local embassies. They can also opt to receive notifications when data roaming usage hits a specified amount in a day. These “warning signals” are available in four levels, with amounts varying between S$20 and S$100 for Level 1, and between S$200 and S$1,000 for Level 4. The existing alert triggers when usage reaches S$100.

By the end of this month, StarHub customers will also be able to suspend and reconnect their data roaming service directly via Roam Manager.

Along with the new service, the operator also introduced four new monthly data roaming plans, which cost between S$30 (10MB) and S$200 (100MB), for 21 countries around the world. These will complement the existing data plan with a daily cap of S$15 in 11 Asia-Pacific countries.

Service providers will differ for users who opt for the monthly plan instead of the daily model. For example, in Hong Kong, the daily plan is supported by a tie-up with Hutchison Telecommunications, while the monthly plan will be tied to either CSL or China Mobile HK.

Singapore’s two other mobile operators, SingTel and M1, also offer similar daily plans with prices capped at S$20 and S$15, respectively, for post-paid customers.

M1 said an SMS alert can be sent to customers when they hit 5MB of data usage. Subsequent alerts, the local mobile operator added, will be sent at 20MB, 40MB and 100MB intervals. SingTel also offers the same SMS alert service when usage hits 5MB, 15MB and 25MB.

This article was first posted in CNETAsia.

Gartner: Consider alternative networking vendor

The networking market has changed over the last decade, with more viable players capable of competing with frontrunner Cisco Systems, according to an industry analyst, who notes that switching to a different vendor has its advantages.

In an interview with ZDNet Asia, Mark Fabbi, vice president and distinguished analyst at Gartner, said the networking landscape has moved from a seller’s market dominated by Cisco ten years ago, to a more competitive environment today populated with more players. Toronto-based Fabbi was speaking at a media briefing hosted by Hewlett-Packard last week.

“If you look back at the last decade, Cisco really set the terms and conditions of the market,” the Gartner analyst noted. “It was the one providing the messages and directions in the market, as well as setting the price-points in the marketplace both for equipment and services.”

The landscape, however, has changed in the last few years with “true viable competition” coming from vendors equipped with broad portfolios as well as good service and support, he said. Hewlett-Packard, with its acquisition of 3Com, its move into the enterprise sector, and its efforts to ramp up its technology and capabilities, is among the challengers Cisco now faces, he added.

Instead of defaulting to Cisco, Fabbi said enterprises should shortlist products from other vendors as well, in order to build a better network and save money.

“No vendor, no matter who they are, is best at everything.” 

— Mark Fabbi
Gartner

He pointed out that some IT organizations are unwilling to consider alternative vendors because they are comfortable with the current system or believe it is too difficult to switch partners. However, the latter is a perception rather than reality, the analyst noted.

Cisco, however, remained unfazed.

In an e-mail interview with ZDNet Asia, a Cisco spokesperson said the company “has always enjoyed healthy competition in the networking market”. This is no different now, she added.

“Customers have consistently spoken with their wallets,” she said, pointing out that Cisco remains the vendor with the biggest market share globally for managed switching, enterprise routing and network security based on findings by Dell’Oro.

Benefits of different vendor
According to Fabbi, a benefit of procuring products from other vendors is that enterprises are able to build a better network–one built to fit the requirements of the company.

“No vendor, no matter who they are, is best at everything,” he pointed out. “Enterprises have to start answering, ‘Why am I buying this technology? What problems is it solving? Should I look at other vendors?’”

Economic pressures have also led enterprises to shortlist alternative vendors, instead of just Cisco, for equipment refreshes, he added. That said, enterprises should not use price as the determining factor for switching vendors.

“Saving money is nice but [it should not be] the primary reason for the enterprise to look around and compare vendors,” cautioned Fabbi.

Instead, organizations have to make sure the network built is the right size for the company.

“In some cases, you may find you will spend more money in some places and less in others,” he explained. “By doing an analysis, you can make the right choice.”

Contrary to perceptions that customers are locked in by Cisco’s proprietary technologies, Fabbi said the networking giant’s lack of integration between its acquired products makes it easier for competitors to “infiltrate and sell into parts of the Cisco infrastructure”.

“Cisco grew by acquisition,” he said. “Despite the fact that it sells a lot of things, operationally, [the products] all look and behave a little bit different.” Citing Cisco’s Catalyst and Nexus families of switches as an example, Fabbi said: “A Cisco network is as multi-vendor as another network [built] with [products from] Juniper Networks, HP, F5 Networks or some other vendor.”

He added that there are some elements in Cisco products, such as the Cisco Discovery Protocol (CDP), in which “it continues to try to maintain proprietary capabilities [even though there are industry] standards”. Customers that want choice and openness may, as a result of this, turn to other vendors, he said.

Cisco: Innovation key ingredient
In response to this observation, the Cisco spokesperson said the company “has consistently pursued a standards-based approach to innovation”–whether it is products from the Cisco Catalyst or Nexus family line, or its architectural approach to “borderless networks and the unified fabric in the data center”.

She added that Cisco addresses its competition by “leading with innovation”. “Cisco is focused on innovation and on solving our customers’ problems. We let our customers decide who is best for their business,” she said.

To drive innovation in its products, the networking company spends over 10 percent of its revenues on research and development, she noted, adding that the company last year spent US$5.3 billion on product development.

‘Social Network’ disappoints at Oscars

Its fortunes didn’t fare quite so well as the company it was based on: “The Social Network,” a controversial recounting of the origins of Facebook, did not win the Oscar for Best Picture at the 83rd Annual Academy Awards tonight. As many had been expecting, the award went instead to historical drama “The King’s Speech”.

“The Social Network” also failed to win Best Director (that also went to “The King’s Speech”), Best Cinematography, Best Sound Mixing, and Best Actor, where Jesse Eisenberg’s portrayal of Facebook founder Mark Zuckerberg lost out to “King’s Speech” lead actor Colin Firth. In the Best Actor category, Eisenberg had not been expected to win (in addition to Firth, he was up against the likes of Jeff Bridges and Javier Bardem), but director David Fincher had had a good shot at Best Director and the film was widely considered the front-runner for Best Picture until buzz about “The King’s Speech” started to escalate.

The Fincher-directed film did, however, win Best Film Editing, Best Original Score for the music written by Trent Reznor and Atticus Ross, and Best Screenplay Adaptation for Aaron Sorkin’s acclaimed script.

The hype surrounding “The Social Network” had hit a fever pitch in the weeks before its release, and some critics say that it reached a point of overhype that ultimately made it a less palatable choice for the voters in the American Academy of Motion Picture Arts and Sciences. Some pundits also said that alleged factual inaccuracies–Facebook has decried its portrayal of Zuckerberg as a mean-spirited, near-pathological manipulator of human social connections–may have hurt its chances with the Academy.

That said, “The King’s Speech” was also hit by some claims of twisted history.

Facebook initially fought against the unauthorized “The Social Network” (and the book it was based on, Ben Mezrich’s “The Accidental Billionaires”). But as its release date grew closer, the company changed its tune and said that while Facebook still considered the film “fiction,” it was an entertaining piece of cinema–Zuckerberg himself has said that he hoped it would inspire young people to pursue careers in computer science, and as a surprise prank appeared alongside Eisenberg in an episode of “Saturday Night Live”.

US domain name veto dumped

The Obama administration has failed in its bid to allow it and other governments to veto future top-level domain names, a proposal before ICANN that raised questions about balancing national sovereignty with the venerable Internet tradition of free expression.

A group of nations rejected (PDF) that part of the U.S. proposal last week, concluding instead that governments can offer nonbinding “advice” about controversial suffixes such as .gay but will not receive actual veto power.

Other portions of the U.S. proposal were adopted, including one specifying that individual governments may file objections to proposed suffixes without paying fees and another making it easier for trademark holders to object. The final document, called a “scorecard”, will be discussed at a two-day meeting that has started in Brussels.

At stake are the procedures to create the next wave of suffixes to supplement the time-tested .com, .org, and .net. Hundreds of proposals are expected this year, including .car, .health, .love, .movie, and .web, and the application process could be finalized at a meeting next month in San Francisco of ICANN, or the Internet Corporation for Assigned Names and Numbers.

Proposed domain suffixes like .gay are likely to prove contentious among more conservative nations, as are questions over whether foreign firms should be able to secure potentially lucrative rights to operate geographical suffixes such as .nyc, .paris, and .london. And nobody has forgotten the furor over .xxx, which has been in limbo for seven years after receiving an emphatic thumbs-down from the Bush administration.

“We are very pleased that this consensus-based process is moving forward,” a spokeswoman for the U.S. Commerce Department said in a statement provided to CNET over the weekend. “The U.S., along with many other GAC members, submitted recommendations for consideration and as expected, these recommendations provided valuable input for the development of the new scorecard.”

GAC is the Governmental Advisory Committee of ICANN and is composed of representatives of scores of national governments from Afghanistan to Yemen. The Commerce Department’s National Telecommunications and Information Administration, or NTIA, serves as the United States’ representative on the committee.

ICANN representatives did not respond to a request for comment.

Milton Mueller, a professor of information studies at Syracuse University and author of a recently published book on Internet governance, says an effort he supported–complete with an online petition–“shamed” GAC representatives “into thinking about the free expression consequences” of a governmental veto.

“When I started this campaign, I knew that the Department of Commerce could never defend what they were doing publicly,” Mueller said. “There are also potential constitutional issues.”

Complicating the Obama administration’s embrace of a governmental veto was its frequently expressed support for Internet freedoms, including free speech, laid out in Secretary of State Hillary Clinton’s speech last January. Clinton reiterated the administration’s commitment to “the freedom to connect” in a speech in Washington, D.C. this month.

One argument for the veto over new top-level domains is that it could fend off the possibility of a more fragmented Internet, which would likely happen if less liberal governments adopted technical measures to prevent their citizens from connecting to .gay and .xxx Web sites. In addition, handing governments more influence inside ICANN could reduce the odds of a revolt that would vest more Internet authority with the United Nations, a proposal that China and its allies supported last year.

“I suspect that the U.S. government put (the veto power) in there to show that it wants to respect the wishes of governments,” said Steve DelBianco, executive director of the NetChoice coalition. “I think the U.S. would prefer to see a string rejected rather than let it get into the root and have multiple nations block the top-level domain.”

DelBianco, whose coalition’s members include AOL, eBay, Oracle, VeriSign, and Yahoo, said “blocking creates stability and consistency problems with the Internet…The U.S. government was showing a preference for having one global root.”

Today’s meeting in Brussels between the ICANN board and national governments, which appears to be unprecedented in the history of the organization, signals a deepening rift and an attempt to resolve disputes before ICANN’s next public meeting beginning March 13 in San Francisco. (The language of the official announcement says the goal is to “arrive at an agreed upon resolution of those differences.”)

A seven-page statement (PDF) in December 2010 from the national governments participating in the ICANN process says they are “very concerned” that “public policy issues raised remain unresolved.” In addition to concern over the review of “sensitive” top-level domains, the statement says, there are also issues about “use and protection of geographical names.”

That statement followed years of escalating tensions between ICANN and representatives of national governments, including a letter (PDF) they sent in August 2010 suggesting that “the absence of any controversial [suffixes] in the current universe of top-level domains to date contributes directly to the security and stability of the domain name and addressing system.” And the German government recently told (PDF) ICANN CEO Rod Beckstrom that there are “outstanding issues”–involving protecting trademark holders–that must be resolved before introducing “new top-level domains”.

WAC stores to co-exist with major app stores

Telco-supported mobile app shops established by the Wholesale Applications Community (WAC) can co-exist with existing app stores operated by platform owners such as Apple and Google, but not without some challenges, say analysts.

Comprising 68 members from the telecom industry as well as handset manufacturers, WAC aims to provide a “wholesale” platform offering apps that are developed to run on multiple devices. It was commercially launched at last week’s Mobile World Congress in Barcelona.

In a phone interview with ZDNet Asia, Marc Einstein, industry manager at Frost & Sullivan, noted that WAC app stores are able to co-exist with other major OS-specific app stores in the short term. A vast majority of mobile phones are not supported by an app store, Einstein noted, adding that of the 1.6 billion phones shipped last year, only about 300 million units were smartphones.

Daryl Chiam, principal analyst at Canalys, concurred that WAC app stores can co-exist with major app stores. To compete with existing app stores and boost the use of the WAC app store, Chiam said operators need to ensure the store comes preinstalled on the phones they sell.

Based on its latest specifications, one of the benefits WAC apps are touted to offer is billing integration with the operator’s network–a capability many app stores currently lack, he said. WAC app stores also allow operators the opportunity to resell apps and increase their mobile revenue, he added.

However, Chiam noted that all is not rosy for the WAC ecosystem. He explained that developers will need to sacrifice user experience for “write once, run everywhere” apps to cater to the different platforms. To address this challenge, he suggested developers figure out how to increase user engagement.

Einstein added that players involved in promoting WAC need to ensure there are enough compatible devices in the market to support demand for its apps.

Opportunity in developing markets
According to the Frost & Sullivan analyst, a bigger opportunity for these carrier-supported app stores lies in the developing markets. He noted that emerging markets are not as saturated with app stores, specifically, Apple’s App store or Google’s Android Market. A previous report from Frost & Sullivan noted that smartphone sales in the Asia-Pacific region accounted for 54 percent of total devices sold in 2010, up from 9 percent in 2009.

George Huang, vice president of Huawei Software Technologies, concurred.

In an interview with ZDNet Asia, he noted that the WAC platform can offer more apps for mobile users in emerging markets as most of them cannot afford expensive smartphones.

However, Huang believes that WAC app stores can also persevere in developed markets and compete against existing app store operators by offering users a wider choice of applications.

He added that app stores can co-exist, pointing to operators such as China Mobile and China Telecom which have included applications from Nokia’s Ovi Store in their own app stores.

Ninety percent of Windows Phones updating fine

Microsoft has provided more detail into the number of phones that are having problems with a software update it began to roll out at the beginning of the week.

Speaking to ZDNet about reports that some phones were becoming unusable after the update, a Microsoft representative said the company had seen a 90-percent success rate by customers who were attempting to install the update.

“Of the remaining 10 percent, the top two issues encountered are the result of customer Internet connectivity issues and inadequate storage space on the phone or PC,” the company representative said. “These account for over half of the reported issues with this update.”

Reports of problems with the update, which had been pushed out to phones to help prepare them for the first of two updates that will add new features, began appearing shortly after the update began to make its way into the hands of users. Microsoft had sent out notifications about the update to users in waves, letting some grab the updated software before others.

Users with Samsung devices appear to have borne the brunt of the problems. Microsoft responded by temporarily pulling the update for Samsung Windows Phone users. For some updaters, the process hung just past the halfway point, leaving them with a non-functioning device. Microsoft yesterday told news site WinRumors that it had identified the cause of the problem, but had pulled the update as a precaution until a fixed version could be sent out.

Microsoft is urging those users with phones that had been left unusable after the update to contact their mobile operator or device manufacturer for repair options. In the meantime, the Hardware 2.0 blog over at ZDNet has instructions for doing a full restore of the phone for users who may have gotten stuck during the update process.

This update had been a precursor to the long-awaited first update to the Windows Phone 7 platform that will bring new features like copy and paste, an improved Marketplace search tool, and faster load times for some games and applications. This update had been sent out to ease the installation of that update package, much like Microsoft does ahead of major service packs for its Windows operating system.

Google rolls out Honeycomb SDK for Android tablets

Google has released the full software development kit for Honeycomb, the tablet-friendly version 3.0 of its Android operating system.

In a blog post on Tuesday, Android SDK tech lead Xavier Ducrohet wrote that the release made it possible for developers to create applications for the new platform and publish them to the Android Market.

Honeycomb looks quite different to other versions of Android, as it is designed for use on larger screens than those present on smartphones. The new SDK makes it easier to manage screen space usage and the kinds of gestures that people will use on tablets such as the Motorola Xoom, which will be the first Honeycomb-bearing tablet to hit the market.
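
As a hedged example of the building blocks the new SDK provides, the minimal Java fragment below uses android.app.Fragment, introduced in API level 11, which is how Honeycomb apps typically split a large tablet screen into panes; the class name and text are placeholders.

// Hypothetical minimal Honeycomb fragment; a real app would usually inflate an XML layout.
import android.app.Fragment;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.TextView;

public class PanelFragment extends Fragment {
    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
                             Bundle savedInstanceState) {
        // Build this pane's view in code for brevity.
        TextView view = new TextView(getActivity());
        view.setText("Hello, Honeycomb tablet");
        return view;
    }
}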

Read more of “Google rolls out Honeycomb SDK for Android tablets” at ZDNet UK.

Major mobile operators close in on NFC

The largest mobile operators in the U.K. and abroad have all agreed to provide services using near-field communications, the technology that powers smart cards and contactless bank cards.

On Monday, Deutsche Telekom, Vodafone, Orange and Telefonica issued a joint statement along with other operators, saying they intended to launch commercial near-field communication (NFC) services for handsets in select markets by 2012. The mobile companies operate the T-Mobile, Vodafone, Orange and O2 brands in the U.K., respectively.

“NFC is perhaps best known for its role in enabling mobile payments, but its applications go far beyond that,” said Franco Bernabe, the chairman of international operator body the GSM Association (GSMA), in the statement. “NFC represents an important innovation opportunity and will facilitate a wide range of interesting services and applications for consumers, such as mobile ticketing, mobile couponing, the exchange of information and content, control access to cars, homes, hotels, offices, car parks and much more.”
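
To make the “exchange of information” use case concrete, the short Java sketch below shows how an Android activity (NFC support arrived in API level 9) might read an NDEF message from a scanned tag; the activity name is hypothetical and the snippet is not tied to any operator’s NFC service.

// Illustrative only: handle an NFC tag dispatched to this (hypothetical) activity.
import android.app.Activity;
import android.content.Intent;
import android.nfc.NdefMessage;
import android.nfc.NfcAdapter;
import android.os.Parcelable;

public class TagReaderActivity extends Activity {
    @Override
    protected void onNewIntent(Intent intent) {
        super.onNewIntent(intent);
        if (NfcAdapter.ACTION_NDEF_DISCOVERED.equals(intent.getAction())) {
            Parcelable[] raw = intent.getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES);
            if (raw != null && raw.length > 0) {
                NdefMessage message = (NdefMessage) raw[0];
                // The first record's payload might carry a ticket identifier or coupon code.
                byte[] payload = message.getRecords()[0].getPayload();
                System.out.println("Read " + payload.length + " bytes from tag");
            }
        }
    }
}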

Read more of “Major mobile operators close in on NFC” at ZDNet UK.

Intel seeks new MeeGo partner

Intel chief executive Paul Otellini has said the company is looking for a new partner to help develop the MeeGo OS, following Nokia’s switch to Windows Phone 7.

Nokia has not abandoned MeeGo, but its decision to focus on Windows Phone 7 for its smartphones has left question marks over the OS’s future. Intel is not throwing in the towel, however, having recently demonstrated the OS at the Mobile World Congress in Barcelona. “We will find another partner,” Otellini told news wire Reuters in an interview. “The carriers still want a third ecosystem and the carriers want an open ecosystem, and that’s the thing that drives our motivation.

“Some closed models will certainly survive, because you can optimize the experience, but in general, if you harness the ability of all the engineers in the world and the developers in the world, open wins,” Otellini added.

Read more of “Intel looking for new MeeGo partner after Nokia’s move to Windows Phone” at CNET UK.

Sony Ericsson eyes No.1 Android maker label

newsmaker BARCELONA–Sony Ericsson wants to be the No. 1 Google Android handset maker in the world. And it needs a strong foothold in the U.S. market to make that goal a reality, said company CEO Bert Nordberg.

Sony Ericsson, a joint venture between Japanese consumer electronics maker Sony and Swedish telecommunications equipment maker Ericsson, has been on the mobile phone scene for about a decade. The company has mostly concentrated on delivering high-end phones to the European and Asian markets. But it’s never had a strong presence in the United States, which has helped keep its overall market share in the bottom half of major handset providers.

But Sony Ericsson has bigger ambitions. ZDNet Asia’s sister site CNET sat down with Nordberg on the eve of the GSM Association’s Mobile World Congress to hear how the company plans to become the No. 1 Android device maker. Nordberg talked about Sony Ericsson’s highly anticipated Xperia Play, dubbed the Sony Ericsson PlayStation phone.

The phone, which is based on Google’s latest Android software and was introduced Sunday at Sony Ericsson’s press conference, will become its flagship smartphone in the U.S. market. To generate buzz ahead of the launch, Sony Ericsson ran an advertisement during the broadcast of the Super Bowl. And according to Nordberg, it worked. He wouldn’t say how much the company spent on that ad. But he said the CEO of a major U.S. carrier called him directly to ask when his network could get the new phone.

“It was the first time we had a Super Bowl ad,” he said. “But it was money well spent.”

Nordberg also shared some candid opinions about the deal announced last week between rival handset maker Nokia and Microsoft. And he discussed the importance of Sony Ericsson cracking the U.S. carrier market. Below is an edited excerpt of the conversation.

Before we talk about Sony Ericsson’s big news, let’s discuss the newly announced Nokia-Microsoft partnership. Last week, Nokia announced that it will use Microsoft’s Windows Phone 7 operating system as its primary OS. What does this mean for Sony Ericsson?
Well, it’s clear that our focus is on Android. It’s where our focus has been this past year. And we will continue that. In fact, we plan to double the number of Android phones in the market this year. It’s an ongoing journey, but we like our position in the Android ecosystem. And we’ve made big contributions to the open-source software.

We think the Nokia news is quite interesting for others, especially those who have invested in the Windows Phone 7 ecosystem.

But Sony Ericsson has supported the Microsoft mobile platform in the past. Does this mean that you aren’t going to be a Windows Phone 7 supporter?
We are not big supporters of the Microsoft platform. It’s not a big part of our strategy, so it’s not really an issue for me. But for companies that have invested a lot in Windows Phone 7, they have to ask if Nokia will get an advantage that will change the game.

That said, as a European I think it says a lot about where the industry is going. It looks like the last stronghold in Europe in mobile has moved to the West Coast of the U.S. The U.S. is taking over. They are first with LTE. So much of the OS innovation is happening there. It’s obvious that it’s more important to come from the Internet world than from the mobile world. And that is why California is so important.

Nokia is still the world’s largest maker of cell phones. From a competitive standpoint are you still worried about them?
I was worried about them more before their announcement with Microsoft. It’s probably going to work out better for us. They would have had a greater impact on us if they had gone with Android.

Speaking of Android, how can you as a handset maker differentiate your product on Android, when so many of your competitors are also using the software?
That is the trick. We can build beautiful phones that connect to the living room, because we are partly owned by Sony. So we can connect to TVs. We have better screen technology, better cameras. And then our other parent is Ericsson, which owns the network. So we know about changes and features for the fastest speed networks. Ericsson has a very strong network patent portfolio, and we can leverage the ecosystem for those network technologies to get good margins.

So hardware is where you see Sony Ericsson differentiating itself?
Yes, that is where we can offer innovation by merging products and platforms, like the Sony Ericsson PlayStation phone. And we also have big ownership in content: movies, music, and TV programs. So we have a strong relationship there as well.

Upgrades to Android come out so quickly. What is the strategy for supporting all these different versions of software? That must create a bit of a problem in terms of how long you can support a particular phone.
Upgrades in the mobile market have become a lot like the computer industry. The upgrades are coming rapidly. And it really changes the nature of the industry. Mobile phones used to be phones with computers built into them. But now that’s changing. They’re now computing devices with a phone. That’s why so much of the development has gone to the West Coast in the U.S. And it’s why we are working so closely with Google.

One of our competitors has said they will support upgraded software for up to two years and then cut it off. We haven’t set specific timing on this. That’s difficult to do. But because the chipsets get upgraded every three years, it means that after three years some CPUs won’t be able to run the software of today. So I think two years is not too bad a strategy when you are talking about supporting software upgrades.

You just announced the Xperia Play smartphone, which has been dubbed the Sony Ericsson PlayStation phone. It’s one of the first iconic devices from the company to launch in the U.S. And it’s the first device you’re selling on Verizon Wireless. Why the U.S. and why Verizon?
We’ve always launched products in Europe and then the U.S. But we’ve learned that the U.S. won’t take a device unless they’re first. So the strategy has turned around. As I said before, we’re seeing a lot of activity in mobile happening in California now. It’s why we moved our CTO and chief creative team from Europe to the U.S. So I now have two executive teams reporting to me from California. This is not a joke. Operators in the U.S. know we are serious about this market and we’re coming to them.

So why launch with Verizon Wireless first? You’ve offered other Sony Ericsson devices on GSM carriers in the U.S., such as AT&T and T-Mobile USA.
Verizon Wireless is such a big player in the U.S. market, so it’s become very important. And also Verizon is a great company with a good network. It doesn’t mean that they will be alone in offering this device. We’re not big on exclusivity. So I think we should remain open.

Some handset makers have lamented about how difficult it is to get into an American carrier. What’s your take on this?
They (U.S. operators) have 23,000 different things you have to do to be allowed on their networks. So it’s damn difficult to get in there. There is a lot of coding and special adaptation that needs to be done. And they only accept very good phones in the network. But once you get in, the investment is done. So we hope that is step one.

As you’ve stated, it’s not easy to break into a U.S. carrier. So how did you do it with Verizon?
One of our parent companies is Ericsson, and that’s how we got in. Ericsson sells LTE gear to Verizon. And Ericsson also bought some networking businesses from Nortel, which also sold to Verizon. So we could build a relationship from that. Then we started to show them the phones. And they loved the Xperia Play.

Some people say that CDMA is a dying technology. And Nokia has chosen to essentially ignore the CDMA market. Once LTE is deployed, there won’t be the need for CDMA or even older generations of GSM technology. But with the Sony Xperia Play, you are expanding your CDMA product portfolio to support devices on Verizon. How important is it for you to support CDMA, especially in the U.S.?
All CDMA customers will evolve into LTE customers. HSPA customers will also become LTE customers. And then the technologies will merge. But that hasn’t happened yet. And it will take some time. So we could wait and introduce LTE devices. But why would we? Some U.S. carriers are still dependent on the CDMA technology. We want to work with them now as they are in transition. There is a big race to 4G. And we are well-placed because Ericsson is building these LTE networks. So I expect we will have an advantage in that.

The smartphone market is so competitive these days. And Sony Ericsson is not in the top three of handset makers worldwide. What is your goal for the company going forward? Do you hope to be one of the top handset makers?
We want to be No. 1 on Google Android.

Do you mean No. 1 on Android in the world or in the U.S.?
Yes, in the world. Last year, in nine months, we took 14 percent market share in Android worldwide. And we only had four devices. It could have been better. But I’d say that’s not a bad start. We are definitely the No. 1 Android player in Western Europe. But we can’t be No. 1 in the world without the U.S. We need to get into the U.S. market. And we think we need 25 percent of the market to be No. 1 in the world. We are already No. 1 in Japan and Sweden.

Motorola already has a strong Android brand in the U.S., particularly on Verizon’s network. You will now also offer some Android phones on Verizon. How much of a threat is Motorola to your plan to be No. 1 in Android worldwide?
Motorola has a similar strategy with Android that we have. In the U.S. they are very strong. But the difference between us and them is that over 70 percent of their business is in the U.S. Right now, we are limited in the U.S. So we can only do better in the U.S. Motorola is strong where we are weak, and we are strong where they are weak.

Verizon Wireless is launching a lot of very cool new phones this spring. It just launched the Apple iPhone. Neither Apple nor Verizon have released sales figures yet, but Verizon has said that presales of the device were stronger than in previous device launches. How will the Sony Ericsson Xperia Play compete against the Apple iPhone?
I think our phone addresses a different segment of the market. I expect the iPhone will do well. But we will be targeting different customers. We offer a different proposition. This is a gaming and entertainment device. I’d show how some of the games work, but honestly, it’s targeted to a much younger consumer. Besides I have three daughters. And unfortunately they were into horses much more than they were into games.

Service providers need to look ahead

BARCELONA–Service providers need to invest in technologies that will carry them into the future even if it is not yet obvious that these will help them win the race, urged a panel of speakers who identified mobile Internet as a high-growth area.

During his keynote at the Mobile World Congress 2011 here Wednesday, Cisco Systems Chairman and CEO John Chambers said service providers need to look forward and place their bets on technologies relevant for the future, even though their advantages might not be obvious now.

“You have to be willing to place your investments [on technologies] three to five years before they are obvious,” he said. “You have to be willing to ride through short-term criticisms and not be distracted by where you are taking your company.”

Chambers believes the future will be dominated by mobile Internet and video.

“People used to talk about these as separate categories. In my opinion, these will be the characteristics of all fundamental innovation and business change for the next ten years,” he said.

Masayoshi Son, chairman and CEO of Japanese telecommunications company Softbank, pointed to his own organization as an example to underscore the importance of staying ahead of the curve. He described the company’s 2006 acquisition of Vodafone Japan as a “crazy bet” at the time because the US$20 billion deal was transacted in cash and used mostly to pay off debts.

Moreover, Softbank was losing US$1 billion a year, brought on by the dot-com bust at the turn of the millennium, and its share price dipped 60 percent following the announcement of the acquisition.

The bet, however, paid off, Son said, noting that Softbank managed to increase in value despite the telecom market’s flat revenue growth and increasing CAPEX (capital expenditure). This was driven by the growth of its market share as well as the increase in total ARPU (average revenue per user), he said.

The company’s gamble on data services also played a role in boosting ARPU, which helped to offset the drop in ARPU for voice services, he added.

Today, all Softbank customers are 3G subscribers compared to the world average of 22 percent, and 85 percent of new subscribers are smartphone users, he said. The mobile operator is Japan’s third-largest.

Son noted that data traffic per user has increased 1,200-fold over the past 10 years and is set to grow even more, particularly as content such as video becomes richer. This is why mobile Internet will continue to be a big bet for the company, he said.

Liau Yun Qing of ZDNet Asia reported from Mobile World Congress 2011 in Barcelona, Spain.

RIM and Nokia: Carrier-friendly smartphone alternatives

BARCELONA, Spain–Research In Motion and Nokia share a similar vision for success: help wireless carriers avoid becoming a dumb pipe.

RIM co-CEO Jim Balsillie and Nokia CEO Stephen Elop shared the stage here Wednesday at the Mobile World Congress as part of a keynote panel. Competition is heating up between the two handset makers after Nokia’s announcement last week that it will team up with software maker Microsoft.

Since the announcement last Friday, Elop has been calling the Nokia-Microsoft pairing the “third horse” in what today is shaping up to be a two-horse race in the mobile industry between the Apple iOS and Google Android platforms. While Nokia and RIM still rank No. 1 and No. 2, respectively, in terms of worldwide smartphone sales, their market share has been giving ground to the Apple and Google platforms.

But where Apple and Google are often seen as a threat to wireless operators because they offer value-added services, such as music, navigation, and even language translation, RIM’s Balsillie said he wants to help wireless operators extract value from their networks. And Nokia’s Elop agreed.

“The tricky dilemma is that there are 900 different carriers,” Balsillie said. “How do you enable these different carriers so that they are not hijacked [by someone else’s services]?”

Balsillie said he sees RIM first and foremost as a hardware and e-mail service provider, offering the most network-efficient push e-mail service on the market. He claims that RIM’s BlackBerry devices consume about half the network resources that similar products from competitors consume. The company also provides an added layer of security to its services that makes them less vulnerable to attacks.

One of the important aspects of RIM’s app store, Balsillie said, is the fact that it allows carrier billing for apps as well as within apps. This not only provides a more convenient way for customers to purchase apps or services within apps, but it also allows the carrier to extract some value from the transaction as well.

“We are not an app company,” he said. “What we want to do is plug into what the carriers are already doing.”

Elop said that when carriers talk about Apple and Google there is a sense that they are enabling services through which profits are going in another direction. He said that it’s important for the “third ecosystem” in mobile to help carriers retain a lucrative stake.

“The philosophy of this third ecosystem and what Nokia has done for many years is to find a balance with carriers,” he said. “There needs to be an operator-friendly player. And we aim to be the most operator-friendly platform out there.”

Carriers around the world are embracing devices running iOS software and Android, mostly because these are the devices and services that consumers want. But there is a real fear among wireless operators that the services and capabilities developed as part of these platforms will make the carrier itself irrelevant. It will be Google and Apple that offer all the value to consumers via applications and app store services, while the carrier will only provide basic connectivity. In other words, carriers will become a mere conduit.

“What is most important is how we can avoid being reduced to a ‘dumb pipe’,” said Ryuji Yamada, CEO of Japan’s NTT DoCoMo, who also participated in the keynote panel Wednesday. “We are susceptible more than ever to becoming this dumb pipe because of smartphones. And we are determined to avoid it by all means.”

China Mobile CEO Wang Jianzhou in his keynote presentation Tuesday expressed similar sentiments and advised carriers to continue innovating to avoid falling into this trap.

But some providers say that it’s too late.

“Mobile carriers are becoming dumb pipes,” Masayoshi Son, CEO of SoftBank, said during a keynote session earlier. “That’s the depressing reality.”

Indeed, NTT’s Yamada described a service his company could offer that provides automatic translation for people speaking different languages. For example, a Japanese person could talk to his friend who speaks Spanish by using an NTT service.

But Google is already offering this exact service. In fact, the Web powerhouse showcased the Google Translation application at Mobile World Congress a year ago. Yamada acknowledged that the battle to stay relevant will not be easy. But he said it’s a battle that carriers must win.

“Theoretically, we could offer [this translation] service as part of a carrier cloud service or through a third party,” he said.

“It’s a race between the camps,” he continued. “But as a network operator, we are in the best position to know what the network is capable of. And we are determined not to lose this race.”

Nokia’s Microsoft deal leads to shareholder revolt

Were the champagne celebrations of a Nokia-Microsoft partnership premature?

An unnamed “group of nine young Nokia shareholders” who have also been employees released an open letter on Tuesday to the company’s other shareholders and institutional investors that, in a nutshell, said that the Microsoft deal is a bad one for Nokia and that CEO Stephen Elop should be replaced. (Techmeme)

In the letter, the group said it plans to challenge the Microsoft partnership and strategy at the company’s Annual General Meeting for Shareholders on May 3. It said that it has also developed a “Plan B” approach that involves not only replacing Elop but also looks to revamp the company’s hiring strategy and eliminate “outdated and bureaucratic R&D practices.”

Read more of “Nokia’s Microsoft deal leads to shareholder revolt, call for a ‘Plan B’” at ZDNet.

An iPhone with slide-out keyboard?

Would Apple really consider a slide-out keyboard for its next-generation iPhone?

So goes the latest rumor. A Taiwanese blog, Apple.pro, says it has its hands on information pointing to three different models being considered for final production as the iPhone 5, expected to be released this summer (here’s a Google Translate link).

One has a physical keyboard that slides out, and another is said to be like an iPhone 4 in styling but with a longer-lasting battery and a better camera. The upgrade from an iPhone 4 to that model of iPhone 5, according to the report, would be similar to the modest improvements from iPhone 3G to iPhone 3GS.

Obviously the report is to be taken with a grain of salt or two, but the site has gotten some reliable leaks in the past. It’s been wrong too, according to Apple Insider.

Steve Jobs has expressed his distaste for physical cell phone keyboards in the past. When the original iPhone was introduced in January 2007, Jobs told the MacWorld audience that Apple chose a multitouch virtual keyboard in lieu of a physical one, in part because once a keyboard is put on a mobile phone, it’s there forever, and it’s hard to change the buttons to work with different applications.

Not that Jobs has never changed his mind before. But Apple is also carrying the banner for all things touch-related, which likely extends to iPhone keyboards for the foreseeable future.

Ericsson bets on mobile broadband, cloud

BARCELONA–Ericsson is looking at mobile broadband and cloud services to drive its efforts toward a “networked society” and has announced a partnership with content delivery provider Akamai to push content to mobile devices.

During his keynote speech at the Mobile World Congress tradeshow here Monday, Ericsson President and CEO Hans Vestberg promoted the concept of a networked society, in which “anything that can be benefited by a network will be connected”. In fact, the networking equipment vendor last year predicted that by 2020, the world will have 50 billion connected devices, he said.

According to Vestberg, the three factors that will bring this vision to fruition are mobility, broadband and cloud.

He noted that the number of mobile subscribers is expected to balloon from 5.3 billion at the end of 2010 to between 7 billion and 8 billion by 2015, adding that this does not include machine-to-machine adoption.

For operators, broadband has become one of the most important revenue growth areas, he said, adding that mobile broadband adoption is growing so fast that, by 2015, network traffic passing through smart devices is expected to equal that of PCs.

Mobile broadband will have a huge impact on society as it is able to reach more people, said Vestberg. He added that about 50 percent of the overall traffic from the world’s 500 million smart devices passes through Ericsson’s networks.

To boost its capability to provide the right content to the right smart device at the right time, the company today signed an exclusive partnership with content delivery company Akamai. The deal will leverage Ericsson’s experience in provisioning data in networks, as well as Akamai’s relationships with content providers, to more efficiently deliver content to mobile consumers, said Vestberg.

Looking to the cloud
Ericsson is also looking to ride the cloud bandwagon and has been providing a range of cloud offerings such as hosted applications and services.

According to Vestberg, the company last year invested in India-based Novatium, which provides PC-as-a-service technology, and currently offers a PC-on-the-cloud service–targeted at operators–that will enable service providers to create new profit avenues from their existing network infrastructure.

At the company’s exhibition booth, Novatium CTO Vinod Kumar Gopinath explained that its service differs from the competition because its offering spans everything from the device to the connectivity. Companies and individuals do not need to worry about the hardware specification, software, broadband connection or maintenance, he told ZDNet Asia.

The service was launched commercially two years ago and currently has about 40,000 users in India, said Gopinath. Users purchase the devices, priced from US$140, and pay about US$3 per month to use the service, he said.

Liau Yun Qing of ZDNet Asia reported from Mobile World Congress 2011 in Barcelona, Spain.

LG cautious over Nokia-Microsoft deal

LG has reacted tentatively to Microsoft’s new partnership with Nokia, which will give the Finnish handset maker much deeper input into Windows Phone’s development than that allowed to other companies using the platform in their devices.

At an LG press conference on Monday at Mobile World Congress in Barcelona, company business strategy chief Yong-seok Jang told ZDNet Asia’s sister site ZDNet UK that there “must be a strategy rationale” for the partnership announced on Friday.

The deal will see Nokia abandon MeeGo as its chosen platform for high-end phones, but will give Nokia more standing in the Windows Phone ecosystem than LG, Dell, HTC and Samsung.

Read more of “LG cautious over Nokia-Microsoft Windows Phone deal” at ZDNet UK.

Mobile operators not liable for forced shutdown

Neither mobile operators nor users are entitled to legal recourse when service providers are forced to shut down or disrupt services by authorities in the markets they operate in, according to lawyers.

The scenario played out in the recent protests in Egypt against now-ousted President Hosni Mubarak. Mobile network operators in the country were ordered by the government to shut down all their network services on Jan. 28, according to the Wall Street Journal.

Two foreign-owned telcos, Vodafone of the United Kingdom and France Telecom, also claimed the authorities forcibly used their text messaging networks to send out pro-government and army-endorsed SMSes to their citizen subscribers, a separate report by the Journal stated. Vodafone said the Egyptian government utilized the emergency powers provisions of the Telecoms Act to send out the messages.

Rajesh Sreenivasan, head of technology, media and telecoms practice at Rajah & Tann Singapore, told ZDNet Asia in a phone interview that telcos are able to operate in a particular jurisdiction because they are issued a license by the government. Because of that, they will have to comply with the terms of that license; if the telco chooses not to comply, it could face “the wrath of regulation breach”, he pointed out.

The same exclusion of operator liability via a license clause can also extend to the sending of pro-government text-messages to citizen subscribers, added Sreenivasan. When a government invokes emergency powers, it covers a broad spectrum of what they can do, from shutting down places to imposing curfews; hence, it is a “non-issue” for telcos to comply with the authorities’ requests, he explained.

That same power, he said, is used to issue tsunami warnings, for example, because SMS is the easiest way to get the message across.

Bryan Tan, director of Keystone Law, held a similar view. “Under normal circumstances, the government would have covered themselves with the ability to order the shutdown of services for national interests.

“This would be covered by legislation or under the licenses granted to the mobile network operators,” he said in an e-mail.

No need for operators to claim damages
Rajah & Tann’s Sreenivasan also pointed out that in the case of Egypt, it would be “unnecessary” for a telco to claim damages, as losses have been curbed by the restoration of most services. Revenue from text messages, he added, is not as high as that from voice and data.

A statement from Vodafone indicated that the operator’s services for voice and data were restored on Jan. 29 and Feb. 3, respectively.

At press time however, there were no updates on the restoration of text messaging services, even though the end to the hostilities came into sight on Friday, when the country’s leader decided to end his 30-year reign.

If there were no clauses in the telco’s license and the emergency powers are not wide enough to cover the activities the government carries out, there are grounds for telcos to claim that they were obligated to carry out actions that caused them to suffer losses, Sreenivasan added.

Mobile users can’t sue
Similarly, it is common for telcos to have a general exclusion of liability in the event of a government request to suspend their services, according to Sreenivasan.

Keystone Law’s Tan noted that the network operators would themselves be covered by the service contract in the event of a shutdown due to government orders. “As the [telcos] really don’t have a choice or discretion, mobile subscribers may have little recourse.”

There is typically a provision in the service contract that if an operator cannot fulfill its service agreement because of a government order or force majeure, such as a natural disaster like a flood, it would not be held liable, Tan said.

Asked about managing customer relations in the event of an authority-backed service shutdown, Ivan Lim, deputy director of corporate communications and investor relations at M1, said in an e-mail statement that the telco will focus on minimizing subscriber anxiety by keeping their customers notified of the latest developments.

“Should the situation arise where the authority informs of the shutdown of network operations for the sake of national interests and security, we will ensure that our customers are consistently and adequately provided with up-to-date information on the [situation, and] supported with a readied business continuity plan,” he said.

Nokia: Windows Phone 7 to be market challenger

BARCELONA–The Nokia-Microsoft partnership will make Windows Phone 7 a third challenger in the current mobile operating system market, says the Nokia CEO, who adds that the decision is welcomed by telcos as it will give users more choice.

In a press briefing here Sunday evening, Nokia CEO Stephen Elop acknowledged that both Microsoft and Google had courted the Finnish company to ink a partnership, before the phonemaker chose the Windows Phone 7 platform instead of Google Android.

The collaboration will position Windows Phone 7 as a strong third challenger in the smartphone market currently dominated by Apple iOS and Android, said Elop.

Citing his discussions with telcos, he said the decision to create another challenger in the market is well received by mobile operators as it will bring more handsets into the market and offer consumers more choice.

If Nokia had decided to go with Android, the collaboration could make the Google OS a “monopoly” due to the platform’s market share and Nokia’s strong footprint in the smartphone market, he said.

Elop clarified that the partnership does not make Nokia an OEM (original equipment manufacturer). Instead, the smartphone maker will contribute a variety of services such as the Ovi Store and location-based functionality to the Windows mobile platform which can be deployed by other Windows Phone 7 handset manufacturers.

He added that Microsoft will bring its Bing search engine, mobile ads and Xbox integration to Nokia’s handsets. The value transfer to Nokia is estimated to be “in the billions” of dollars, he said.

The Finnish company is currently working on new concepts for Windows Phone 7 handsets, Elop revealed, but he did not give a specific launch date for these devices, saying that the company wants to first ensure the products’ commercial viability.

Asked if he sees Research in Motion’s enterprise-targeted BlackBerry as a competitor, Elop said the Nokia-Microsoft partnership will be a strong rival to the Canadian phonemaker due to the relationship with the Microsoft Office creator and Nokia’s experience in Symbian and E-series phones.

During the media briefing, the CEO also touched on Nokia’s efforts in regaining its footprint in the smartphone market, noting that the company is working on the low-end segment of the market. He said the company will be bringing “fresh” features to these handsets as well as country-targeted efforts such as dual-SIM phones for markets such as India.

“Bold decision” but right
In a research note Monday on the Nokia-Microsoft partnership, Ovum’s principal analyst Tony Cripps noted that there were limited short-term options available for the Finnish company to catch up with the growth of iOS and Android. In particular, the Google mobile platform had looked set to overtake Nokia in terms of smartphone shipments, Cripps said.

“This is a bold decision by Nokia but absolutely the right one, both for itself and for Microsoft given the drastically changed landscape for smartphones in the past couple of years,” the analyst said.

Adam Leach, also a principal analyst at Ovum, said in the same report: “It’s ironic that the sole purpose of Symbian was to stop Microsoft from repeating its domination of the PC market in handsets.

“Nokia now has the opportunity to cast itself in the role that Intel has taken in the Windows PC market as a mutually beneficial, symbiotic marriage between equals rather than as simply a box-shifter.”

Leach, however, noted that there are still potential risks that Nokia could become “merely a vehicle” for Microsoft and its services should the Finnish company fail to differentiate itself from other Windows Phone 7 makers such as HTC, Samsung and LG.

Ovum analyst Nick Dillon added: “For Microsoft, this is nothing less than a coup and the shot in the arm its new Windows Phone 7 platform needed, which despite winning acclaim for its innovative design and user experience has so far failed to set the market alight in terms of sales.”

Liau Yun Qing of ZDNet Asia reported from the sidelines of the Mobile World Congress in Barcelona, Spain.

Nokia, Microsoft becoming Windows Phone bedfellows

Microsoft and Nokia announced a broad mobile phone partnership on Friday that joins two powerful but lagging companies into mutually reliant allies in the mobile phone market.

As expected, Nokia plans to use Microsoft’s Windows Phone 7 operating system as part of a plan to recover from competitive failings detailed in Nokia Chief Executive Stephen Elop’s “burning platform” memo.

But it’s deeper than just an agreement to install the OS on Nokia’s phones. Instead, the companies call it an attempt to build a “third ecosystem”, acknowledging that competing with Apple’s iOS and Google’s Android involves a partnership that must encompass phones, developers, mobile services, partnerships with carriers, and app stores to distribute software.

“There are other mobile ecosystems. We will disrupt them. There will be challenges. We will overcome them. Success requires speed. We will be swift,” Elop and Microsoft CEO Steve Ballmer said in a boldly worded open letter. “Together, we see the opportunity, and we have the will, the resources and the drive to succeed.”

The companies will cooperate tightly under an agreement the companies so far describe only as proposed, not final. Under the deal, Windows Phone 7 would become Nokia’s “principal” operating system, and Nokia would help Microsoft develop it and ensure a broad range of phones using it are available globally.

Nokia will use a number of Microsoft online services, many of which trail their Google rivals, such as Bing for search and maps and AdCenter for advertisements.

When it comes to the sales part of the ecosystem, each company brings something to the deal. Microsoft phones will be able to link up with Nokia’s agreements for carrier billing–a popular option in parts of the world where credit cards are less common. And Nokia will fold its own app store into the Microsoft Marketplace.

It’s not immediately clear what needs to be done to make the deal final; “specific details of the deal are being worked out,” the companies said.

Nokia, once the dominant power of the mobile phone industry, has ceded the smartphone initiative to Apple’s iPhone and Google’s Android, and Elop believes Nokia’s own Symbian and MeeGo operating systems aren’t competitive. Microsoft has tried for years to penetrate the mobile phone market, and although it now has a credible option with Windows Phone 7, it trails Android when it comes to developer interest and the breadth of phones available.

The two companies can expect their combined might will be more convincing for software authors debating whether they need to bring their apps to yet another ecosystem. But it’s not yet clear how the alliance will extend to another hot new market, tablets, where Microsoft prefers Windows instead of the Windows Phone operating system. In contrast, iOS and Android developers enjoy the same mobile operating system on phones and tablets.

Elop is set to detail the proposal later today at an analyst meeting in London that will be publicly Webcast. The news also arrives immediately before the vast Mobile World Congress trade show in Barcelona, Spain, where a large number of new Android phones and tablets can be expected.

It’s uncertain what effect the alliance will have. Microsoft has had strong operating system partnerships with multiple competing PC makers, but the Nokia alliance, with mutually developed products and shared road maps, appears much deeper than the average relationship Microsoft has with hardware makers. That could encourage those who’ve made strong Android commitments–HTC, Motorola, Sony Ericsson, LG Electronics, Samsung, and more–to double down. After all, they’re all enjoying a period of relative freedom with Nokia in its present relatively uncompetitive state, and strongly pushing Windows Phone products arguably would be abetting the enemy.

The announcement was accompanied by a YouTube video featuring Microsoft and Nokia’s chief executives praising the deal.

“Today, Nokia and Microsoft intend to enter into a strategic alliance,” Elop said in the video, a precursor of a turnaround plan he’s set to detail later today at an analyst conference in London. “Together, we will bring consumers a new mobile experience, with stellar hardware, innovative software, and great services. We will create opportunities beyond anything that currently exists.”

Ballmer said the partnership “brings the brands mobile consumers want, like Bing, Office, and of course Xbox Live.”

Lack of IPv6 mobiles not worrying

Mobile devices that support only IPv4 could pose problems for users in future, but analysts say the current dearth of IPv6-enabled smart devices in the market is not a cause for worry yet.

In a phone interview with ZDNet Asia, Craig Skinner, senior consultant at Ovum, said apart from “a handful of Nokia devices”, not many mobile phones are able to handle IPv6 (Internet Protocol version 6) over a 3G connection. However, some companies such as Apple with its iPhone and iPad devices, as well as HTC, enable IPv6 connectivity over the Wi-Fi interface, he noted.

Marc Einstein, Frost & Sullivan’s Asia-Pacific industry manager for ICT practice, concurred, noting that the vast majority of smartphones in the market are IPv4-only devices.

Phones that are not IPv6-compliant can become a problem for users, according to Einstein. He predicted a “disturbing” time in the future when owners of IPv4-only phones are not able to access IPv6-only addresses.

Despite this, users planning to get a new device should not be deterred by the lack of IPv6-compatible devices as IPv4 addresses have “not fully run out” yet, he pointed out.

On Feb. 1, the Internet Assigned Numbers Authority (IANA) allotted the last two on-demand lots of IPv4 addresses to the Asia-Pacific Network Information Center (APNIC). Subsequently, IANA also distributed the last five lots of IPv4 addresses to the five regional Internet registries (RIR).

Ovum’s Skinner added that service providers are “not shutting down” IPv4 and will run both versions concurrently.

A Nokia spokesperson echoed the analysts’ views that users should not worry if their phone does not support IPv6. “Based on the present design principle, almost all the existing services on the Internet will remain reachable for IPv4-only phone users for the foreseeable future,” he said in an e-mail interview.

By the time users upgrade their phones to newer models, they will “switch seamlessly” to IPv6, he added.

‘Chicken and egg’ problem
Skinner described the lack of IPv6-compliant phones as a “chicken and egg problem”. He noted that phone makers did not cater for IPv6 in their devices because of the lack of IPv6 networks. On the other hand, network service providers saw no need to deploy IPv6 as there were no handsets in the market demanding the protocol, he added.

The situation, however, will start changing, Skinner said, pointing to U.S. carrier Verizon Wireless, which included IPv6 support as a criterion for devices to work on its LTE (Long Term Evolution) network.

Web-connected devices to boost IPv6 uptake
Aside from network provider mandates, Skinner noted that mobile device usage will also be a driver of IPv6 adoption. Traditionally, service providers “extended the use of IPv4” by reusing and sharing network IP addresses among devices. Increasingly, with smartphones and laptops connected to the Internet–and hence holding on to IP addresses–for longer periods, there may be “congestion” if there are not enough addresses, he said.

According to him, IPv6 will only affect mobile app developers “a little” as many apps are agnostic to the two protocols. However, he cautioned that older mobile applications may have code specific to IPv4 and hence be unable to handle the longer IPv6 addresses.

To work around that, app developers should ensure they work with the right set of APIs, Skinner said, adding that mobile operating system providers have updated their application programming interfaces.
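As a rough illustration of the kind of pattern Skinner describes, the following is a minimal sketch of an address-family-agnostic connection written in Java; the class and helper names and the host used are illustrative and not drawn from any particular mobile SDK. Resolving the host name and trying whichever addresses the resolver returns, whether IPv4 or IPv6, avoids hard-coding one protocol into the application:

import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class DualStackClient {
    // Hypothetical helper: try every address the resolver returns for a host,
    // whether IPv4 or IPv6, rather than assuming a single address family.
    static Socket connect(String host, int port) throws IOException {
        IOException lastFailure = null;
        for (InetAddress addr : InetAddress.getAllByName(host)) {
            try {
                Socket socket = new Socket();
                // addr may be an Inet4Address or an Inet6Address;
                // the calling code does not need to know or care which.
                socket.connect(new InetSocketAddress(addr, port), 5000);
                return socket;
            } catch (IOException e) {
                lastFailure = e; // remember the error and try the next address
            }
        }
        throw lastFailure != null ? lastFailure : new IOException("No addresses for " + host);
    }

    public static void main(String[] args) throws IOException {
        // "example.com" is a placeholder; any dual-stack host would do.
        try (Socket s = connect("example.com", 80)) {
            System.out.println("Connected via " + s.getInetAddress());
        }
    }
}

Code written this way keeps working whether the handset is assigned an IPv4 address, an IPv6 address, or both, whereas code that stores addresses in four-byte fields or parses dotted-quad strings is the sort that Skinner warns would break.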

S’pore telcos see value in Mi-Fi handsets

Mi-Fi-enabled handsets are starting to gain traction in the market, but rather than see them as a threat to their mobile broadband business, two Singapore-based carriers believe such devices can boost mobile data traffic.

Ivan Lim, deputy director of corporate communications and investor relations at M1, said Mi-Fi support on mobile handsets will provide consumers an “added alternative” to access wireless broadband via their mobile devices.

Ng Long Shyang, head of marketing and sales at StarHub, agreed and added that the carrier has no plans to disable handsets with Mi-Fi capabilities.

He explained that from a business perspective, growing mobile data traffic and, in turn, revenues are “important considerations” for operators, and it does not make sense to clamp down on such handsets because they help drive mobile data usage among consumers.

With Mi-Fi devices, users can create mobile hotspots that allow multiple devices to connect to a 3G cellular Internet service–also called tethering. Some smartphones are also equipped with Mi-Fi capabilities, including those powered by Google’s Android 2.2, known as Froyo, such as Dell’s Streak and HTC’s Desire devices.

Apple is also reportedly looking to include Mi-Fi support in its next iOS 4.3 software update. According to technology Web site Ars Technica, iPhones sold by U.S.-based Verizon Wireless already come with a mobile hotspot feature, which will be rolled out to all compatible handsets in the upcoming OS update.

In a previous ZDNet Asia report, Springboard Research analyst Bryan Wang said some telcos may ban smartphone tethering and encourage consumers to buy multiple data SIM cards for every device they want to Web-enable.

Mi-Fi solves broadband congestion?
Revenues aside, Mi-Fi-enabled handsets can also help alleviate 3G broadband traffic congestion.

Nitin Bhat, partner at research house Frost & Sullivan Asia-Pacific, had earlier predicted that Mi-Fi devices such as smartphones will have a “robust business case” as their ability to offload data traffic will ease the strain on existing 3G networks. Bhat added that consumers can do without multiple data plans and SIM cards with Mi-Fi, utilizing one plan for multiple devices instead.

Lim agreed, noting that because Mi-Fi supports multiple users or devices on one network source, the network operator will only identify the primary user accessing the network and not its accompanying users.

That said, he acknowledged that this method of easing wireless broadband congestion is not ideal. “Sharing of data among several devices or parties will subsequently lead to a lag in connectivity as opposed to the connection quality of one dedicated source. The user experience will thus be affected,” he explained.

Ng, however, did not believe Mi-Fi-enabled handsets would alleviate 3G broadband congestion, given that such devices would still tap the existing broadband infrastructure to support multiple devices.

Ovum’s senior analyst, Nicole McCormick, shared his sentiments. She said in her e-mail that Mi-Fi-enabled handsets and devices will likely increase the amount of traffic on 3G broadband connections. This, though, will generate additional revenue opportunities for carriers, McCormick said.

The analyst instead pointed to femtocells as a better solution to alleviate network congestion. She said femtocells, which provide a local mobile 3G hotspot with fixed network backhaul, would be more attractive to operators looking to address rising demand for bandwidth.

Industry insiders, however, said in an earlier ZDNet Asia report that femtocells lacked a compelling business case, which is hindering mass adoption of the devices.

As femtocells are managed by users, carriers have no way of ensuring their wireless coverage quality can be adequately maintained from the consumer end. Furthermore, the lack of operator buy-in means the devices remain pricey, which is another barrier to adoption.

Meanwhile, operators are already looking at other options to improve mobile broadband coverage quality.

StarHub, for instance, upgraded its network on two levels, Ng revealed. First, it implemented HSPA+ dual-carrier technology, which could potentially double mobile broadband speeds to 42.2 Mbps, he said. Second, the carrier is working with Huawei Technologies on a smartphone signaling offering that optimizes the way handsets communicate with the network.

Ng said: “This signaling technology effectively halves redundant signaling loads, hence improving mobile broadband connectivity and overall smartphone performance.” He added that StarHub is looking at long-term evolution (LTE) in its next phase of mobile network development projects.

M1 is also looking to LTE to improve its mobile broadband business in the future.

Lim said: “The adoption and upgrade of our network to LTE is an area that we’ll be placing much focus on as we anticipate a strong growth in mobile data, and LTE would be an efficient mode in supporting this growth.”

Social media most evolved in S’pore

SINGAPORE–The city-state is among the world’s most evolved social media markets and its people’s national pastime, shopping, is clearly reflected in their online habits, according to research conducted by Firefly Millward Brown.

Released during a press briefing here Thursday, the survey findings revealed that Singaporeans’ lives converge online and offline, where their families, friends, interests, work and hobbies could be found in the tangible as well as virtual world.

Nichola Rastrick, managing director of the research firm, said: “For example, if they can see branded products in a shop, they expect to also find them in an online environment.” This was unlike other countries in the region, where Internet users relied on social media more for communication, she said.

Covering 15 countries including Singapore, China, India and the United States, the qualitative survey was developed based on the observations of 32 selected bloggers in each country, according to Firefly.

Christopher Madison, the company’s regional director of digital strategy, said Singapore’s evolved social landscape is due to the fact that its citizens are brand-savvy and genuinely want to be associated with fashion brands even in the digital world.

“The things that they do in [Singapore’s shopping strip] Orchard Road, can be very similar to what they are doing online, such as to find out more about discounts and events offered by the popular brands,” said Madison.

Hence, he noted that companies and marketers are also more proactive in making their online presence felt by engaging consumers through Facebook and other social media platforms, in the form of viral videos and regular news updates.

Besides shopping, food blogs and banks were also some of the more popular “encounters”, or mentions, in Singapore’s social media scene, according to the survey.

It added that easy and cheap access to the Internet, as well as the comfort level with going online, are some of the reasons why social media is more pervasive here.

While the study showed that the experience and behavior of social media users did not vary too much among the 15 countries surveyed, the “shopping association” was less obvious in Thailand and Indonesia.

Firefly’s findings revealed the Thais used social media to create a sense of community, and much of the online conversation revolved around expressions of friendship and connectedness.

Indonesians, however, regarded social media as a way to establish social status and success, and as a platform for self-promotion.

Rastrick added that mobile penetration rate is extremely high in Indonesia, and with the constant traffic jams, platforms that provide brief and quick means of communicating such as Twitter are gaining popularity.

And while Facebook might not be readily available in China, the country’s online citizens were still active participants on social media networks, turning instead to local platforms such as Renren for online conversations, according to Firefly.

However, due to the restriction of Facebook, Chinese social media users felt left out of global dialogues, the survey found.

Businesses still figuring out social reach
But while social media may be the rage now, companies and marketers are still struggling to find the right way to reach out to consumers, according to Firefly.

Rastrick explained that the survey findings clearly showed that consumers did not want social platforms to turn into an avenue to hawk goods and services. Instead, they wanted marketers to engage them in dialogues, she said.

She warned that the biggest mistake marketers can make is to treat social media networks as a “marketplace”.

Madison added that businesses should cultivate a two-way conversation with the online community and establish a proper social media team to run effective campaigns.

“It’s easier to get on than [keep a campaign going]… Once you start something in the social media space, it is a commitment,” he said.

Using Singapore as an example, Madison said consumers are savvy and know what they want, and companies should invest in the social media space to respond to this market.

He also identified some rules for social media engagement, such as being selective about the platforms and using tactics to motivate the influencers and social media “stars”, or high-profile social personalities. This can be achieved by having good knowledge of the local market, he added.

Other rules include paying attention to small details, allowing negative comments so that consumers can make informed decisions, and building social media credentials through “humanization” of the brand, he suggested.

Nokia prepares for major shake-up

Nokia’s CEO Stephen Elop is reportedly preparing for a major shake-up at the company as he searches for a way to save the once mighty cell phone brand.

Elop is expected to unveil a new strategy for turning around the company at its investors’ conference in London on Friday. Nokia has been slipping in terms of market share over the last several quarters as it faces stiff competition at the high end of the market from Apple’s iPhone as well as phones running Google’s Android platform. And at the low end, the company is also facing competition from Chinese manufacturers.

News outlets are already reporting bits and pieces of the new strategy supposedly leaked from insiders. Reuters said Wednesday that unnamed sources at the company confirmed that Nokia has halted development of its new high-end mobile operating system, MeeGo. And The Register in the U.K. said in its story that “well-placed sources” inside the company told it that Nokia is considering moving its headquarters from Espoo, Finland, to Silicon Valley.

And Nokia this week may announce that it is adopting an operating system from one of its rivals, either Microsoft’s Windows Phone 7 or Google’s Android, according to The Wall Street Journal. The Journal said today that Nokia is in talks with Microsoft about making use of Windows Phone 7, along with its own Symbian software. Before joining Nokia last fall, Elop was a top executive at Microsoft.

Nokia representatives declined to comment.

Elop, who hinted at sweeping new changes during the company’s most recent earnings call with investors, wrote a scathing internal memo that was leaked to The Wall Street Journal and Engadget this week.

In that memo, he said the company has lost its competitive edge to competitors Apple and Google. Apple’s iPhone has dominated the smartphone market for the past couple of years, and Google’s Android operating system has quickly picked up momentum as Nokia’s traditional handset competitors adopt the free, open-source platform.

In the memo, he noted that the company’s own two operating systems–Symbian and MeeGo–may not be enough to combat rivals. The traditional Symbian OS is unwieldy, and the MeeGo effort, announced almost a year ago for high-end devices, is woefully late.

“We thought MeeGo would be a platform for winning high-end smartphones,” he said in the memo. “However, at this rate, by the end of 2011, we might have only one MeeGo product in the market.”

The company has already started to cancel product launches in the U.S. Last month it was reported that the company canceled the upcoming U.S. release of a new smartphone, the X7, which was supposed to be exclusive to AT&T. The company also reportedly canceled the launch of another device on T-Mobile USA’s network.

As for possible plans to virtually relocate the company’s headquarters? It’s not entirely unlikely. The board of directors made a bold move in putting Elop in charge. He is the first non-Finnish CEO in the company’s 150-year history.

Nokia moved into its current headquarters in Espoo in the 1980s, The Register said. If the company moved headquarters to the U.S., it likely wouldn’t affect the company’s main development facility in Finland.

Other executives have also made bold moves to change the company throughout its history. The Register noted that the late CEO Kari Kairamo ushered Nokia into the high-tech age with numerous acquisitions in the 1980s. And Jorma Ollila shed many of the company’s legacy industrial businesses. Later, he ditched Nokia’s consumer electronics and computing products.

While Nokia’s presence in the U.S. today is minimal, the company did have a major facility in Irving, Texas, for several years. In an effort to regain market share in the U.S., the company opened a new office in Sunnyvale, Calif., in December, which could serve as the new headquarters.

New phone to feature Android plus Facebook

British start-up INQ Mobile will be releasing a new phone that spices up the Android operating system with tight Facebook integration. Among the features of the new phone, called the INQ Cloud Touch, are four Facebook-related buttons on the home screen, Facebook friends integrated with contacts, and a prominently featured real-time News Feed of Facebook activity.

According to a demo video taped by TechCrunch, the phone is intended to be a mid-level device geared toward teenagers, meaning that it could be available for a rather low price–perhaps as low as US$50–when purchased with a contract. The Cloud Touch will also be available overseas before it hits the United States market.

Rumors of a “Facebook phone” circulated last fall, causing some to believe that Facebook would be developing, branding, and selling a device in the manner of Google’s Nexus One, which was ultimately a failure. Facebook repeatedly denied that it was building a phone, but executives have said that the promise of the mobile world means that you’ll be seeing Facebook on both smartphones and lower-end devices far more.

Chief Operating Officer Sheryl Sandberg explained last September that the company’s strategy would be to work on getting Facebook synced up to many different kinds of mobile devices, and that it sometimes requires partnerships and deals. “We want to make Facebook available everywhere on every device,” she said at the time.

“That’s actually complicated in a world of so many cell phones, so many mobile operators…even the screen size is different, so you have to work with the different devices [to develop apps].”

Making these mobile inroads is important as Facebook, which has more than 600 million active users around the world, works to expand in regions where it historically has not had a strong presence. In many of these regions, Internet access happens primarily on mobile devices rather than PCs.

To that end, Facebook recently worked with mobile development firm Snaptu to build an app for lower-end cell phones that will be accessible free of data charges in a handful of overseas markets.

Software brings Android apps to other platforms

Mobile software specialist Myriad is preparing to launch new software that allows non-Android-based smartphones to run apps designed for Google’s mobile operating system.

The software, known as Alien Dalvik, will allow non-Android operating systems to run Android Package (APK) files with little modification, the company said in an announcement on Tuesday.

“The proliferation of Android has been staggering, but there is still room for growth,” said Simon Wilkinson, chief executive of the Myriad Group, in a statement. “By extending Android to other platforms, we are opening up the market even further, creating new audiences and revenue opportunities.”

Read more of “Alien Dalvik brings Android apps to other platforms” at ZDNet UK.

Alcatel-Lucent shrinks cell tower technology

Telecommunications infrastructure maker Alcatel-Lucent announced this week new technology that will help wireless carriers expand their networks to keep up with the explosive growth in mobile data.

The new compact cell phone antenna system, called lightRadio, incorporates radio technology and base station technology in a single box. The entire system, which can fit on a lamp post, is a fraction of the size of today’s cellular equipment. Current cellular networks require massive and power-hungry cell phone towers that house the antennas, with a separate base station at the bottom of those towers that controls the antennas.

When carriers have needed to add capacity or improve coverage, they’ve had to deploy these massive cell site towers. Alcatel-Lucent’s lightRadio system, which will be ready for carrier trials later this year, allows carriers to deploy new cell sites much faster and less expensively than they have been able to do in the past. It also means that carriers can reduce the electricity used to power the cell phone towers and base stations.

All in all, wireless operators can reduce the cost of deploying and maintaining a new cell site by almost half of what it is today.

That has huge implications for the wireless industry, which is struggling to keep up with demand for more data services from smartphones and tablet PCs. In fact, wireless data traffic is expected to increase 26-fold between 2010 and 2015, according to Cisco’s latest Visual Networking Index Forecast. Cisco conducts the survey every year to track network growth.
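As a back-of-envelope illustration (not a figure from Cisco’s report), a 26-fold rise over the five years from 2010 to 2015 works out to a compound annual growth rate of roughly 92 percent, as this small calculation shows:

public class TrafficGrowth {
    public static void main(String[] args) {
        // 26x growth over 5 years implies an annual growth factor of 26^(1/5),
        // i.e. a compound annual growth rate of 26^(1/5) - 1, about 92 percent.
        double impliedCagr = Math.pow(26.0, 1.0 / 5.0) - 1.0;
        System.out.printf("Implied annual growth: %.0f%%%n", impliedCagr * 100);
    }
}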

“It’s clear that the explosion in mobile data will continue,” said Wim Sweldens, president of Alcatel-Lucent’s wireless division. “The architecture that Alcatel-Lucent is proposing will help avert a potential wireless crisis. If carriers don’t move in this architectural direction then the problems we are starting to see today will only get bigger. And growing the networks will not be economically viable.”

Wireless carriers have been preparing for traffic increases by adding more capacity to their radio networks as well as their back-haul networks that carry the traffic from the radio towers to the Internet. The wireless industry has been pushing the Federal Communications Commission to make more wireless spectrum available so that they can increase capacity. But getting new spectrum into the market takes time.

One way to add more capacity to the available spectrum is to deploy more cell sites that are smaller in area. Splitting cell sites means that wireless operators can serve more customers or provide more bandwidth to individual customers in each cell site.

Carriers have already begun using a mix of smaller and smaller cell sites in their networks. For example, femtocells provide personal cell sites that can be placed in a home or business. The smaller cell sites are connected to a home or office broadband connection to improve indoor wireless coverage.

But splitting cell sites on a macro level in a metropolitan area is a little trickier if the old cell tower and base station architecture is used. Getting new cell towers approved is time consuming. And putting up those towers is expensive. It’s also expensive to run these towers, which means long-term this architecture isn’t viable.

That’s where Alcatel-Lucent says its lightRadio technology comes in. It would allow wireless operators to deploy smaller cell sites much more quickly and at a much lower cost.

“We are applying the same principles that we’ve talked about in using femtocells for the entire mobile network,” Sweldens said. “We start by replacing the big towers with smaller elements that are easier to deploy, use less power, and connect smaller sites to broadband infrastructure that is already in place. So we can take advantage of the cloud-like architecture to get better economies of scale that either lead to reducing costs for operators or the ability to deliver more bits at the same cost.”

The new technology has other important benefits as well. Because the antennas are software configurable, carriers can use the same set of equipment to offer 2G, 3G, and 4G service from the same access point. What’s more, upgrading from one technology to another simply requires a software upgrade.

This is very different from what is done now. Today, when wireless carriers upgrade from a 3G technology such as EV-DO or HSPA to a next-generation technology, such as LTE, they are required to deploy new hardware. But with the Alcatel-Lucent lightRadio system, they simply do the upgrade in software.

But Alcatel-Lucent’s new technology, which is modular in design like building blocks in a Lego set, is not just a big improvement for existing wireless players. It can also be used to help other companies, such as cable operators, get into the wireless market at a much lower cost.

Cable companies already have a lot of high-capacity broadband infrastructure in the ground. And some of them also own wireless spectrum licenses. Cox Communications has used some of that spectrum to build a regional wireless network, while others such as Comcast and Time Warner Cable have invested in other wireless services like Clearwire.

“The future for any broadband provider is building one network that can serve customers whether they are mobile or at home,” Sweldens said. “Our new technology will help companies leverage their existing wireline infrastructure to provide wireless services. The cable MSO market is definitely one of our target markets.”

Alcatel-Lucent isn’t the only company developing smaller, more modular, software-configurable cell phone access points. Market leaders such as Ericsson and Huawei have also been working on software-defined radio technology. But Sweldens believes that Alcatel-Lucent is the first company to announce plans for such products.

“This is indeed part of a general trend in the industry,” he said. “But what we’ve done is made a breakthrough by building the smaller cubes that fit together. We feel pretty confident that we are the first to commit to such a product road map. And that is the news.”

Report: Google, EC in early settlement talks

Google could be a little closer to resolving at least one of its regulatory headaches, according to a report.

Reuters notes that Google and the European Commission have entered into talks over the antitrust investigation that began last November. It’s still pretty early in the process: Reuters’ source said there were “some tentative discussions in resolving the issue, but no really concrete proposals on the table.”

Google is even more dominant in Europe than it is in the U.S., with market share over 90 percent in a few countries. A few companies, led by Foundem, have long complained that Google unfairly penalizes their sites in search results because they compete with Google, a charge that Google denies.

When it launched the investigation, the Commission said that it would investigate those complaints as well as complaints about Google’s quality score for determining ad placement, but said it didn’t necessarily have proof of any wrongdoing. Regulators have been sending questionnaires to Web businesses as part of their effort, as noted by Search Engine Land earlier this year.

The European investigation is the most significant probe of Google’s business practices yet launched, although authorities in the U.S. have been sniffing around the proposed acquisition of ITA Software and the long-delayed ratification of Google’s settlement with author and publisher groups over Google Book Search.

Report: Microsoft management changes in the works

Microsoft is said to be on the brink of another shuffle among its senior management.

Microsoft CEO Steve Ballmer plans to make changes to the company’s senior management in order to improve the company’s competitive edge in Web services, smartphones, and tablet computers, according to a Bloomberg report that cites unnamed sources.

Those changes, Bloomberg says, will be announced “this month”.

What remains unclear is whether the changes will bump out any of the existing division heads, in place of talent from within or outside of the company, versus changing the number of business units and their executive make-up. Bloomberg did say that a central part of the company’s plan was to “promote managers who have engineering chops and experience executing on product plans,” which would imply moving someone at the top to make way for that promotion.

A Microsoft representative declined to comment.

Microsoft has a long history of making changes to its management structure. While Ballmer has stayed at the helm for a little more than 11 years now, the company has made drastic changes to the number and depth of its business units.

Microsoft’s last big management shuffle took place back in October, with Ballmer naming Kurt DelBene as the head of Microsoft’s Office Division, Don Mattrick as the head of the Interactive Entertainment Business, and Andy Lees as the head of Microsoft’s Mobile Communications business. That followed the departures of Stephen Elop, who left to become CEO at Nokia, as well as Robbie Bach, who retired from his spot as president of the Entertainment and Devices unit last May.

More recently, the company had a shake-up in its Server and Tool Business, with the company announcing the planned departure of Bob Muglia, who had served as president for the division. Muglia had been promoted just two years prior as part of Microsoft’s elevation of the server unit into a larger part of the company’s business.

What should Nokia do?

commentary It’s hard to know what to make of Nokia these days. Though it still holds a huge worldwide market share and sells more phones than its competitors, it doesn’t quite capture the buzz it once had, and its presence in the United States has dwindled.

Sure, the Finns maintain a healthy business selling low-end handsets in emerging markets, but over the last three years, smartphones have been where the action is. And though Nokia still succeeds in that space occasionally–we quite liked the Nokia N8, for example–its strategy has been rather unclear.

To its credit, Nokia is aware of the problem. At last September’s Nokia World, company execs vowed to “shift into high gear” and “fight back in smartphone leadership”. How exactly that fight will unfold remains a popular point of debate in the wireless industry–many analysts have urged Nokia to join the Android family–but up until now, Nokia has kept its cards close.

Come Friday, however, Nokia will fully outline its new strategy at an investor meeting in London. CEO Stephen Elop announced the Feb. 11 meeting during the company’s quarterly earnings call. Elop didn’t get specific, but he set off a wave of speculation when he said the company needs to “build or join a competitive ecosystem”.

“The game has changed from a battle of devices to a war of ecosystems,” Elop said during the call. “And competitive ecosystems are gaining momentum and share.” Immediately, some Nokia watchers theorized that the company would announce that it was developing a handset based on Windows Phone 7 or Android.

Such a move would be surprising, considering that as of late the company has been mildly dismissive of Android while continuing to promote Symbian and the developing MeeGo platform. But with the market hurtling forward at rapid speed, Nokia may have decided that radical change is necessary. So what could its options be?

Stay with MeeGo
From what I’ve seen, most of my tech journalist colleagues are advocating this path. Mary Jo Foley of ZDNet Asia’s sister site ZDNet, for instance, doesn’t see an OS switch to Microsoft happening. Similarly, PCMag’s Sacha Segan and Eric Zeman at InformationWeek also urged Nokia to develop MeeGo as a worthy competitor to Google and Microsoft.

Though I agree that this is the most likely scenario, I can’t say that it excites me. Experienced Symbian users may love Symbian, but the OS can be maddening for everyone else. Sure, Nokia did give Symbian 3 a nice upgrade on the N8, but it needs to do more. And though I’m always a fan of customer choice, MeeGo just doesn’t spark my interest at this point. It could be really cool, and I’m hoping that it is, but Nokia needs to deliver real MeeGo handsets soon.

Android
The most unlikely of the three, I’d say, but still not impossible. Indeed, jumping into Android would entail risks. The OS is growing fast and it’s attracted the attention of major players like Motorola, HTC, and Samsung. Nokia would be arriving late to the party and its rivals will fight to keep the leadership positions they’ve assumed. On the other hand, Nokia could play an “always late, but worth the wait” role.

Windows Phone 7
Honestly, I wouldn’t mind if Nokia went this route while also developing MeeGo. Windows Phone 7 is new and it has its growing pains, but the OS has a lot of promise. Nokia could benefit by getting involved with an OS from the ground up, and Microsoft–which is Elop’s previous employer, by the way–could use the exposure from an industry giant.

Whatever happens, we’ll know for sure this week after Elop breaks the news in London. CNET also will be at Mobile World Congress a few days after that in Barcelona, Spain, where Nokia will kick off its presence at the show by holding a press conference Feb. 13.

This article was first published as a blog post on CNET News.

Microsoft eases procurement of WP7 dev phones

Microsoft is making it a little easier for developers to get their hands on Windows Phone 7 devices for building and testing applications.

In a blog post last week detailing some previously announced updates to the Windows Phone Developer Tools, Brandon Watson, who is Microsoft’s director of developer relations, said that the company has partnered with Zones.com to let developers buy Windows Phone 7 devices without a voice or data contract.

The phones, which include HTC’s HD7 and Surround as well as the Samsung Focus, come carrier-locked, but can be had for about $500 without venturing to a carrier or third-party retail site to make the purchase.

In the past, Microsoft has made a concerted effort to get devices into developer hands even before an official launch. At last year’s Professional Developers Conference, all paid attendees were given phones following the keynote speech, a week ahead of the U.S. launch. That said, to get additional or replacement devices, Microsoft had been encouraging developers to go through carriers, where contract strings were attached.

Momentum builds
Watson also provided an update on the number of Windows Phone 7 developers and apps within its library, and there have been notable improvements in just a week’s time.

Microsoft now says the number of registered Windows Phone developers is 27,000, up from the 24,000 figure the company cited a week ago. Those developers have also bumped up the number of apps on Microsoft’s Windows Phone Marketplace to “more than 7,500,” marking an increase of 1,000 apps since last week.

Microsoft has yet to provide concrete numbers on overall app downloads, though during the company’s CES keynote address, CEO Steve Ballmer said that more than half of Windows Phone 7 users were downloading a new application every day. By comparison, competitor Apple announced it had topped 10 billion application downloads in its less-than-three-year-old app store last month.

Google wants to fight smartphone battle on Web

Google has been playing catch-up to Apple in the mobile world for several years, but it’s starting to carve out its own niche by emphasizing its strength on the Web.

The Android Market Web Store was the most interesting thing to emerge from last week’s event at Google headquarters, and it’s one that Apple can’t easily duplicate overnight. It’s also in keeping with Google’s philosophy of pushing Web development over native software development when possible, a strategy that isn’t always practical on smartphones but is starting to make more sense as computing power grows in tablets.

Most importantly for Google, it gives Android users a cleaner, simpler, and more user-friendly option for buying apps than the much-maligned Android Market. It should also appeal to developers, who will have many more options at their fingertips for promoting their apps on the store and a better chance of being found within the sea of applications.

The advantages of the Android Market Web Store are simple: Android users can browse app selections just like any other Web site from any Web-connected device, rather than dealing with the small, cluttered, and awkward Android Market interface on their phones. A purchased app is linked with a Google Account rather than a device, so it can be automatically pushed to any Android devices registered to that account at the time of purchase.
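To make the account-versus-device licensing model concrete, here is a minimal sketch of the idea. It is purely illustrative and is not Google’s implementation: the class, field names, and device names are invented for this example.

```python
# Conceptual sketch only (not Google's actual system): an app purchase is
# recorded against an account, not a handset, and is pushed to every device
# registered to that account at the time of purchase.
from dataclasses import dataclass, field

@dataclass
class Account:
    email: str
    devices: list[str] = field(default_factory=list)   # registered Android devices
    purchases: set[str] = field(default_factory=set)    # purchased app package names

    def buy(self, package: str) -> list[str]:
        """Record the purchase and return the devices the app is pushed to."""
        self.purchases.add(package)
        return list(self.devices)

acct = Account("user@example.com", devices=["Nexus S", "Galaxy Tab"])
print(acct.buy("com.example.reader"))   # ['Nexus S', 'Galaxy Tab']
```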

And Google has also come up with something that hits Apple where it hurts: Web services. For all its skill in designing mobile hardware and software, Apple hasn’t been able to come up with all that many services that tie everything together over the Web. (Find My iPhone is a notable exception, though it normally requires a US$99 annual MobileMe subscription; iPhone 4 users with iOS 4.2 installed can get it for free.)

Apple’s iTunes is the hub for its mobile strategy, and even the most diehard Apple fan would admit that the desktop application is getting a bit long in the tooth. iTunes has given Apple a centralized distribution and payment-processing system that’s arguably as responsible for the growth of iOS as anything, but it’s resource-intensive and linked to a single computer: you can manage and purchase apps on the iPhone or iPad, of course, but if you want to back them up, you have to physically connect the device to a computer.

Google has long sought to eliminate that link with its Android strategy, pitching its Web-based services as a selling point for those concerned about app backup and contact management. However, it didn’t really have a credible alternative to the ease-of-use that accompanies app shopping on a bigger screen, not to mention the rather poor experience in the native Android Market. Now it does.

Eric Chu, mobile platforms product manager for Google, said that the Web Store won’t replace the native Android Market on phones and tablets just yet. He said Google will continue to make improvements to the native store because that’s still probably the best experience on phones.

But Google’s quest in this world is to one day replace software developed for specific machines with software developed on and for the Web. Mobile devices lag behind their desktop counterparts when it comes to supporting this kind of strategy (and even desktops aren’t all the way there) but as standards get sorted out and mobile browsers become more powerful, the conditions needed to allow that to happen will start to come together.

This is also a powerful differentiator for Google and its partners. By emphasizing Android’s hooks into Google’s broader array of Web services, Google gives its partners a selling point that others can’t match without a great deal of investment in skills that aren’t necessarily complementary to those of mobile operating system developers and industrial designers.

It’s not exactly a game changer, but it’s a nice example of how the many companies trying to live up to the high bar set by Apple with iOS can score points by knowing their strengths and focusing on sore points in the iPhone and iPad experience.

Now if Google can address some of the sore points in the Android experience–such as the slow pace of operating system updates actually reaching phones, for one–it might start setting the pace on its own.

This article was first published as a blog post on CNET News.

Cisco sees 26-fold wireless data increase in 5 years

Wireless carriers will see mobile data traffic increase 26 times between 2010 and 2015, according to Cisco’s latest Visual Networking Index Forecast. Will wireless operators be ready for it?

That’s the big question. The prediction of steep increases in traffic load is not entirely unexpected. Wireless carriers have been preparing for traffic increases by adding more capacity not only to their radio networks, but also to the back-haul networks that carry the traffic from the radio towers to the Internet.

By 2015, Cisco says, mobile data traffic will grow to 6.3 exabytes–more than 6 billion gigabytes–of data per month. The report indicates that two-thirds of the mobile data traffic on carrier networks in 2015 will come from video services. This follows a similar trend in traditional broadband traffic growth, and it suggests that as wireless networks get faster and devices gain more processing power and bigger, better screens, people will increasingly watch video on the go.

“What we’re seeing here is true convergence,” said Doug Webster, Cisco’s senior director of worldwide service provider marketing. “We’ve talked about this for a long time, but it’s really starting to happen where people are doing all the things they used to do on broadband connections at home when they’re on-the-go.”

But according to Cisco’s results, mobile data traffic is actually growing faster than traditional landline-based broadband traffic. In 2010 data traffic grew 159 percent, which is roughly 3.3 times faster than traditional landline broadband. And it was higher than the 149 percent growth rate Cisco had predicted in earlier Visual Networking Index reports. But over the next five years, the growth should taper off, Cisco’s report indicates. For example, annual growth rates are expected to go from 131 percent in 2011 to 64 percent in 2015.
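For readers who want to check the arithmetic behind these figures, the headline numbers compose simply. The sketch below derives the average annual growth rate implied by a 26-fold increase over five years and spells out the exabyte-to-gigabyte conversion; it uses only the figures quoted above and is for illustration only.

```python
# Quick arithmetic on the forecast figures quoted above (illustrative only).

years = 5               # 2010 through 2015
overall_multiple = 26   # the "26-fold" traffic increase Cisco forecasts

# The single annual growth rate that, compounded over five years, yields 26x.
implied_annual_growth = overall_multiple ** (1 / years) - 1
print(f"Implied average annual growth: {implied_annual_growth:.0%}")  # roughly 92%

# Unit check on the 2015 volume: one exabyte is about one billion gigabytes.
exabytes_per_month = 6.3
print(f"6.3 EB/month is roughly {exabytes_per_month * 1e9:,.0f} GB/month")
```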

So what exactly is driving the growth? The first main driver is the proliferation of mobile devices, said Suraj Shetty, a Cisco marketing vice president. Last year, Cisco’s Index predicted that the smartphone installed base would increase 22 percent in 2010, but Informa Telecoms and Media data indicates that the number of smartphones in use grew by 32 percent during the year, Cisco said.

In addition to the increase in smartphone adoption, there was a sharp increase in those smartphones that have the highest usage profile: iPhones and Android phones. The number of iPhones and Android devices in use grew 72 percent in 2010, bringing the combined iOS and Android share of smartphones to 23 percent, up from 11 percent in 2009.

And the trend is only expected to continue, especially as devices other than smartphones are added to the mix. By 2015, there are expected to be 5.6 billion mobile devices and 1.5 billion machine-to-machine devices in the world. These will include mobile phones, as well as Internet-connected cameras, Net-connected cars, tablets, laptops and more.

In addition to simply having more devices connected to wireless networks, more of these devices will also have better computing capabilities, Shetty added. We’re already starting to see this with smartphones running dual-core processors. The screens on mobile devices are also getting bigger and sharper. Not only are tablets coming on the scene, but smartphones themselves are getting larger and will have greater computing capacity than devices available today.

Network speeds are also increasing as wireless operators move to new generations of technology. In the U.S. wireless operators are talking about their “4G” wireless networks, which can offer download speeds anywhere between 5Mbps and 20Mbps, depending on the technology used. T-Mobile USA and AT&T have their HSPA+ networks. And Verizon Wireless has its LTE network. (AT&T also plans to launch an LTE network this year.) And Sprint Nextel has its WiMax network.

Cisco’s report indicates that average mobile network speeds doubled in 2010 and will only increase over the next five years, with average download speeds expected to increase 10-fold by 2015.

Faster networks, more capable devices with better screens, and the plain fact that there will be more connected devices in five years all mean that wireless consumers will use more network resources.

“There will be more devices with bigger screens and better processors that allow for multiple apps to run simultaneously, and the predominant type of network traffic will be video,” Shetty said. “These trends are all coming together and will have a significant impact on the network.”

What it means for wireless operators is that they need to find a way to keep up with the growing demands on their networks. In the wireless world, the need to keep up with growing demand means a need for more wireless spectrum. Carriers such as T-Mobile say they have enough spectrum today to meet current growth projections. But they say more is needed down the road.

This is why the Federal Communications Commission is working to bring an additional 500MHz of wireless spectrum to market in the next decade, with 300MHz of that to be freed up in the next five years.

But adding more spectrum takes time, and it will not be enough to solve the capacity crunch that wireless operators will likely face in the next few years. Shetty said that wireless operators will have to get more efficient in how they use their network resources, and that Cisco has technology that can help them improve that efficiency.

“There are lot of demands and challenges that carriers face to keep up with demand,” he said. “Cisco can help them better engineer the network. And allow them to scale the network.”

But carriers will also have to invest in other network technologies to help keep up with demand. This will likely include offloading traffic onto femto cells and Wi-Fi networks.

It may also mean shifting business models to encourage consumers to use mobile data more efficiently. Last June, AT&T eliminated its unlimited data plan and began offering a tiered data service with usage caps. Other wireless operators in the U.S. haven’t followed yet. But Verizon Wireless, the largest U.S. wireless operator, has indicated that it will move to tiered pricing. Whether it gets rid of unlimited plans entirely is still unknown, but it’s likely the company will raise the price of unlimited service if it keeps it at all.

The other wild card in this whole scenario is tablets and other connected devices. While more people in the world today have cell phones than have electricity, devices such as tablet PCs will eat up capacity even further, because they can do so much more than many mobile handsets.

It doesn’t take nearly as many tablets to have a significant effect on network loads. A smartphone generates about 24 times more data on a wireless network than a basic feature phone, but a tablet generates about 122 times more data than a basic feature phone, according to Cisco.
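Those multipliers make it easy to see why a comparatively small tablet fleet matters. The sketch below expresses load in feature-phone equivalents using only the 24x and 122x figures quoted above; the device counts themselves are invented for illustration and are not from Cisco’s report.

```python
# Express network load in "feature-phone equivalents" using the multipliers
# cited above: a smartphone ~24x, a tablet ~122x a basic feature phone.
# The device counts are invented purely for illustration.
MULTIPLIER = {"feature phone": 1, "smartphone": 24, "tablet": 122}
fleet = {"feature phone": 1_000_000, "smartphone": 200_000, "tablet": 50_000}

load = {kind: count * MULTIPLIER[kind] for kind, count in fleet.items()}
total = sum(load.values())

for kind, units in load.items():
    print(f"{kind:>13}: {fleet[kind]:>9,} devices -> {units / total:.0%} of traffic")
# In this made-up fleet, tablets are 4% of devices but generate about half the traffic.
```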

Webster said that a year ago, tablets weren’t even on the radar when it came to predicting future mobile data growth. But with the introduction of the Apple iPad last year and now a growing number of tablet PCs, the category is expected to have a significant effect on data usage patterns in the next five years.

“Last year there was zero data traffic on the network from tablet PCs,” Webster said. “And it went from basically nothing to being a significant contributor to mobile network traffic by 2015. This is just indicative of how dynamic this market is with one type of device ramping up so quickly. It has huge architectural implications.”

Yahoo apologizes for Windows Phone 7 data bloat

Yahoo on Tuesday offered an apology to Windows Phone 7 users affected by an inefficiency that left some with larger than usual data usage.

The data problem had cropped up shortly after the launch of Microsoft’s latest mobile venture, with some users finding their cellular data allotments being consumed at an unusually high rate. Microsoft acknowledged the problem in mid-January following a query from the BBC, and later said an unnamed third party was at fault.

Last night Microsoft fessed up that Yahoo was that third party, and that the issue centered on its Web mail service. The admission followed a packet-sniffing investigation by tech blogger Rafael Rivera, who discovered that Yahoo was sending back larger than usual amounts of data every time the phone checked for new mail.

“Tens of millions of people check their Yahoo Mail from their mobile device each day, and we know they want their mobile mail experience to be fast, rich, and real-time,” Yahoo said in a statement. “While our default settings on all mobile platforms realize this approach, we have determined that an inefficiency exists in the synchronization of e-mail between Windows Phone Mail clients and Yahoo Mail, which can result in larger than expected data usage for some users.”

Yahoo reiterated that a fix for the problem was on the way and would arrive “in the coming weeks”, but that for now users should dial back how often their Windows Phone 7 devices check for mail updates. Yahoo also noted that the data issue was not affecting other phone platforms and apologized for any inconvenience to users.

There’s no word yet on whether the fix can be made without users having to update their phone’s system software. Yahoo did not immediately respond to a request for clarification on that issue.

Lawsuit: AT&T overbills iPhone data use

One of the biggest problems that consumers have faced with mobile phone billing in recent years is that there’s really no way of independently measuring the amount of data that’s being consumed by a mobile Web session. Consumers are at the mercy of the wireless carriers and have put their trust in these providers to accurately bill them.

Now, AT&T finds itself at the center of a class action lawsuit that alleges that the provider’s bills “systematically overstate the amount of data used on each data transaction”. Granted, the overstatement that’s being alleged is small–somewhere in the range of 7 to 14 percent monthly, according to a post on the Electronista blog.

What’s especially telling is how a consulting firm that was hired by the lawyers of the plaintiff conducted its own test of the data billing. Instead of using data and trying to measure it independently for comparison against the bill, the consultant did the exact opposite. The firm bought a new iPhone and immediately turned off all push notifications and location services, made sure that no apps or email accounts were active and then left the iPhone idle for 10 days.
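For a sense of what the alleged overstatement means in practice, it is just the gap between what the carrier bills and what the handset actually transferred, expressed as a fraction of the latter. A minimal sketch, with invented numbers:

```python
# Hypothetical illustration of a 7-14 percent overstatement: compare the
# carrier's billed usage against independently metered usage. The megabyte
# figures are invented for this example.
def overstatement(billed_mb: float, metered_mb: float) -> float:
    """Fraction by which the bill exceeds independently measured usage."""
    return (billed_mb - metered_mb) / metered_mb

print(f"{overstatement(billed_mb=214.0, metered_mb=200.0):.1%}")  # 7.0%
print(f"{overstatement(billed_mb=228.0, metered_mb=200.0):.1%}")  # 14.0%
```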

Read more of “Lawsuit: AT&T “systematically overstates” data usage on iPhone bills” at ZDNet.

eBay snags Bing’s development manager, Facebook scientist

Adding to the list of recent departures, Microsoft has lost the principal development manager of its Bing search engine to commerce giant eBay.

According to All Things Digital, Scott Prevost, who joined Microsoft as part of the Powerset acquisition in 2008, has left to become the VP of product management for eBay’s search tool. He’s joined by now former Facebook research scientist Dennis DeCoste, who will be eBay’s director of research. Together, the pair are said to be working on improving the relevancy of eBay’s built-in search tool.

A Microsoft representative confirmed Prevost’s departure, and said “we wish him well in his future endeavor”.

Prior to his two-year stint as the GM and director of product for Powerset, Prevost had been the CEO and CTO at the Animated Speech Corporation, which merged with educational software and research company TeachTown in 2006. As for DeCoste, he too had been a Microsoft employee, having worked as a principal scientist for the company following his stint as the director of research for Yahoo’s Research group.

Prevost joins a handful of recent departures from Microsoft’s management and engineering ranks. Earlier this month, Microsoft announced that server and tools boss Bob Muglia would be leaving the company later this year. More recently, Brad Brooks, who served as corporate vice president in Microsoft’s Windows Group, left the company to join Juniper Networks. Meanwhile, Matt Miszewski–the former general manager of Microsoft’s government business–left Microsoft for Salesforce.com in late December, though he was temporarily blocked from taking his post as a VP after Microsoft won a restraining order based on non-compete and confidentiality agreements Miszewski had signed. There’s also Johnny Chung Lee, the Wii hacker Microsoft hired to work in its Applied Sciences group to develop Kinect algorithms, who jumped ship for Google earlier this month.

Motorola Solutions rides Asia’s urbanization wave

As people become more affluent in rapidly growing Asian markets including China and India, government spending on areas such as public safety and train systems is likely to increase–providing opportunities that Motorola Solutions is looking to tap for continued growth.

Phey Teck Moh, corporate vice president at Motorola Solutions Asia-Pacific, noted that as Asia’s economies expand, a bigger middle class will emerge and this group of people will demand better public safety and transport systems, to name a few focus areas. This, in turn, will force governments to either improve the equipment used or increase the number of devices to support the demand, Phey said in an interview with ZDNet Asia.

He cited the example of walkie-talkies used by the police, where one device is shared by 7 to 10 officers in emerging markets. With urbanization, the number of officers sharing one radio set will be reduced to 3 to 4 policemen, he said.

Metro systems, he added, are another growth area Motorola Solutions is eyeing. According to Phey, China has approved and is deploying 58 railway lines, with plans to lay out another 100 to 150 lines in the next 10 years.

A report by Hong Kong newspaper the South China Morning Post said the Chinese government has pledged 1.25 trillion yuan (US$189.75 billion)–stretching from 2011 to 2015–to build 2,200 kilometers of rail lines in 16 cities.

Phey said: “Every mega first-tier city is growing its second- and third-tier cities, and it’s not just in public safety and transport. Retail and healthcare industries are also expected to grow in the process.”

Cashing in on enterprise mobility
The Singapore-based Motorola executive noted that while consumer mobile devices for white-collar workers are currently hogging the limelight where enterprise mobility is concerned, “true” enterprise mobility is actually more keenly felt in the blue-collar workers’ domain. These sectors include logistics, delivery and repair, among others, he said.

Citing figures from research firm IDC, Phey said the number of mobile workers accessing enterprise systems worldwide will top 1 billion this year and reach 1.2 billion by 2013. Asia-Pacific markets will contribute the most significant gains, although the United States will remain home to the world’s most mobile workforce, he said.

With more workers becoming mobile, it is imperative that the ruggedized mobile devices they use “function properly over a long period of time, are compatible with existing apps even as the operating system is refreshed constantly and that security features are in place”, he said.

He noted that 15 percent to 20 percent of non-ruggedized consumer devices fail in their first year of operation. “Field devices such as mobile scanners are very hot now and this is because of the ongoing workforce mobilization trend,” he added.

According to Phey, Motorola Solutions’ business proposition is now clearer following the split from its mobile devices business on Jan. 4.

It now has a “strategic flexibility” that allows the company to conduct relevant research and development projects, and attract investors that appreciate its long-term, low-volatility growth. In fact, it will be investing US$1 billion globally in R&D projects that are focused on the company’s core capabilities of offering “mission- and business-critical” communication networks combined with applications and services, he stated.

Lawsuit roils on
Asked if its business is affected by the lawsuit initiated by Chinese networking company Huawei Technologies, Phey said no. Motorola does not compete with Huawei in selling network equipment to service providers, and it does not have sensitive information to pass on to Nokia Siemens, he explained.

However, Phey’s comments came before the Chinese company gained the upper hand when a U.S. court granted it a temporary restraining order. Huawei had earlier sued Motorola to prevent it from passing on confidential information about Huawei’s technology to Nokia Siemens, which is attempting to buy Motorola’s wireless networks business in a deal worth about US$1.2 billion.

Motorola since 2000 has been reselling Huawei radio access gear for GSM and UMTS wireless networks. As part of this relationship, Motorola employees are trained to sell and troubleshoot Huawei’s wireless products. Nokia Siemens also sells wireless products that rival Huawei’s offerings.

Mobile broadband is killing free Wi-Fi

After spending two weeks in Japan scrounging for free Wi-Fi, I’ve come to the conclusion that mobile broadband is killing free Wi-Fi.

In seeking to avoid monster costs for global roaming while I was abroad, I disabled that feature on my phone before I left, meaning I was entirely reliant on Wi-Fi to get in contact with friends and family back home.

In Australia, free Wi-Fi is generally available at stores like McDonald’s and Starbucks, as well as the ever-reliable Apple. Apart from using my iPad (which is the Wi-Fi model), I have little use for free Wi-Fi within this country because my 3G download quota with Optus for my iPhone is generally sufficient for all my internet needs, so I had not paid too much attention to what was available.

But prior to departing for my trip earlier this month, I thought I should research what Internet facilities were available. It was a bleak view to say the least, but I was optimistic because my accommodation provided free Internet and the Apple stores were a last resort, so it would all be good.

When I landed in Japan, I found that McDonald’s and Starbucks generally didn’t have any free Wi-Fi and the stores that did offer Wi-Fi often opted for paid services. The most common I found was BB Mobilepoint, a consortium of telcos that offers connections through hotspots mostly at train stations around Tokyo.

Handy for locals, sure, but not so much for tourists. In Australia, Telstra has a similar program in place with its hotspots.

When I was visiting the sights in Akihabara, the “electric town” in Tokyo that boasts dozens of stores with all the computer and high-tech gear you could ask for, I discovered that most of these stores sold WiMax broadband dongles and it was clear looking at the signs around town that most internet access would be through those.

When I did find places with Wi-Fi (the Wired cafes in Ueno and Shibuya, for example), I would often spend at least an hour or so there, and have a full meal at the same time, so I agree with Darren Greenwood that it is a smart business decision for stores to make the investment in free Wi-Fi.

I could only conclude that because most of the locals in Japan had existing mobile Internet accounts, free Wi-Fi was less of a pressing issue for them, so it wasn’t as worthwhile for businesses to offer free Wi-Fi to their customers. 3G killed the free Wi-Fi star.

After my experience in Japan, I could only think of how it would affect tourists visiting Australia, and I think it would be great to see our telcos team up to offer Wi-Fi services in areas where their 3G networks are lagging, and also invest in a free (or cheap) alternative for tourists who can’t otherwise get online.

Or the telcos could look at reducing the incredibly outrageous global roaming costs, so we wouldn’t need to scrounge for free Wi-Fi. But somehow, I still think that’s a long way off.

This article was first published at ZDNet Australia.

Egypt’s Internet disconnect reaches 24 hours

Egypt’s unprecedented Internet disconnection has now lasted 24 hours without signs of ending.

One by one, the country’s electronic links to the outside world fell silent. The shutdown started at 2:12 p.m. PT, with the mostly state-owned Telecom Egypt disabling its networks and four smaller network providers following suit between 2:13 p.m. PT and 2:25 p.m. PT.

Egyptian President Hosni Mubarak appeared on state television at approximately 2:15 p.m. PT last Saturday to announce that he would sack his cabinet but would not resign–an indication that no end to the disconnect was near. “I will not be lax or tolerant,” he said, according to an Al Jazeera English translation. There’s a fine line, he said, between permitting free speech and allowing chaos to spread.

Last Friday’s network disconnection was followed soon after by mobile networks pulling the plug as well. Vodafone confirmed in a statement that “all mobile operators in Egypt have been instructed to suspend services in selected areas”. So did Mobinil, the country’s largest mobile provider. (See ZDNet Asia sister site CNET’s previous coverage.)

Those outages come as four days of clashes between security forces and tens of thousands of protesters continued on the streets of Cairo and other major cities, despite an official curfew in effect Friday evening. Tanks have taken up positions around some TV stations and foreign embassies, and Al Jazeera English is reporting that the end of three decades of autocratic rule by Mubarak may be nearing.

United States Secretary of State Hillary Clinton said in a speech earlier that “we urge the Egyptian authorities to allow peaceful protests and to reverse the unprecedented steps it has taken to cut off communications”.

“We think the government, as many of us have said throughout the day, need to turn the Internet and social-networking sites back on,” White House press secretary Robert Gibbs said. He added: “Individual freedoms includes the freedom to access the Internet and the freedom to–to use social-networking sites.”

Egypt’s Internet connections aren’t completely down: the Noor Group appears to be the only Internet provider in Egypt that’s fully functioning. Cairo-based bloggers have speculated that its unique status grows out of its client list, which includes western firms including ExxonMobil, Toyota, Hyatt, Nestle, Fedex, Coca-Cola, and Pfizer, plus the Egyptian stock exchange.

An analysis posted by network analyst Andree Toonk, who runs a Web site devoted to monitoring networks, shows that before the outage, there were 2,903 Egyptian networks publicly accessible via the Internet. Today, there are only 327 networks.

A chart prepared by European networking organization RIPE provides a detailed glimpse at how Egypt’s network went dark. Until yesterday afternoon, there was the normal noise of networks being added and deleted, followed by a sharp spike yesterday between 2 p.m. and 2:30 p.m. ET. There’s been virtually no activity since.

Before last Friday’s outage, Egyptian use of the Tor anonymizing network had experienced a dramatic spike that coincided with the beginning of widespread protests. Normal usage was hovering around 400 users a day, but leaped to more than 1,200 as of Jan. 24. (Here’s a different view.)

Contrary to some reports, however, there’s no evidence that Syria’s Internet connection is down. Compare this chart from an Egyptian provider showing the network going completely dark with this one from the government-owned Syrian Telecommunications Establishment that depicts normal activity.

The rumors about Syria originated a few hours ago when the Al Arabiya news service reported that “Syria suspends all Internet services”, then followed up with a denial from the authorities. Reuters reported earlier this week that Syrian authorities have banned programs that allow access to Facebook Chat from cell phones.

There are some parallels. The now-defunct HotWired site, succeeded by Wired.com, reported in 1996 that “the U.S. government has quietly pulled the plug on Iran’s Internet connection”. During a state of emergency in Bangladesh in 2007, satellite providers were ordered to cease airing any news shows. And in Burma later that year, the country’s ruling military junta pulled the plug on the nation’s limited Internet access.

But Burma is not Egypt, a country of more than 80 million people equipped with tens of millions of computers and cell phones–people who have now found themselves almost entirely disconnected from the rest of the world.

Egypt receives more than US$1.3 billion annually from U.S. taxpayers in the form of military aid, according to the U.S. State Department.

“Thanks to the blanket communications shutdown, the protests today took place in an information vacuum,” according to a dispatch from Index on Censorship’s Egypt regional editor Ashraf Khalil in Cairo. “On Tuesday, even during the demonstration, everybody was checking Twitter both to coordinate and for news on what was happening across the country. This time nobody knew what was happening anywhere else–not even on the other side of the river in Tahrir Square.”

This article was first published as a blog post on CNET News.

Amazon’s capital spending plans spur debate, worry

There’s quite a tug-of-war underway over Amazon’s capital spending plans. Amazon reported a solid fourth quarter, but also added that it will continue to invest in fulfillment centers and infrastructure to build up Amazon Web Services.

Enter the worrywarts. Amazon is a bit of a conundrum for Wall Street. When the company is harvesting its investments, investors love it. But when the company’s outlook disappoints because it is spending on infrastructure, some analysts freak out. It’s a familiar pattern with tech companies:

When Verizon said it would do something crazy like bring fiber-optic lines to homes for its FiOS network, there were a few quarters of disbelief. Why would Verizon do that? Today, analysts yap all the time about Verizon’s future-proof network. Of course, they also want to see better returns out of FiOS.

Read more of “Amazon’s capital spending plans spur debate, worry” at ZDNet.

Rivals weaken Nokia, Motorola Mobility outlooks

Shares of both Nokia and Motorola Mobility fell amid dismal forecasts for the first quarter, according to reports, undermining their leaders’ efforts to boost handset sales as the onslaught from Apple and Android continues.

Bloomberg reported on Friday that Motorola Mobility–the mobile devices arm following the Jan. 4 split from its networking division–dropped 12 percent on the New York Stock Exchange to US$30.51.

The company predicted a first-quarter loss of 9 to 21 cents per share as sales slowed due to Verizon Wireless’ impending iPhone launch next month, the report added.

Analysts polled by Bloomberg expressed a more positive outlook, though, forecasting a 1 US cent profit per share.

Finnish handset maker Nokia also saw its shares slip after CEO Stephen Elop acknowledged it was facing “some significant challenges in our competitiveness and our execution”, according to a separate Bloomberg report.

Nokia’s shares tumbled 8.7 percent in Helsinki, and closed 0.8 percent lower at 7.74 euros (US$10.60), it stated.

Both reports had industry voices bemoaning the bleak financial outlooks of the two companies.

London-based analyst Pierre Ferragu from Sanford C. Bernstein, for one, called Motorola Mobility’s outlook “slightly disappointing” and expressed concerns that Motorola’s growth potential is limited by the company’s footprint.

Meanwhile, Leon Cappaert, fund manager at KBC Asset Management in Belgium, which has investments in Nokia shares, similarly expressed anxiety over the Finnish company. “What spooks everyone is the outlook: a combination of lack of giving an upside and disappointing margins,” he said.

Apple and Android loom large
Both Motorola Mobility CEO Sanjay Jha and Elop are looking to fend off competition from Apple’s iPhone and Google’s Android-based smartphones, Bloomberg noted.

For Motorola Mobility, competition will intensify once Verizon begins sales of the iPhone. The carrier is one of Motorola’s staunchest allies, selling more of its phones than other U.S.-based carriers, the news wire said.

“We have seen some slowdown as a result of the announcement at Verizon,” Jha said in the report, adding that “Android’s popularity will help [Motorola] compete with Apple”.

He also revealed that Motorola expects to ship 20 million to 23 million smartphones and tablets in 2011, and that the Xoom, its first tablet, will be competitively priced to take on more expensive models like the iPad from Apple.

Since the company adopted the Android OS for its mobile devices, its sales have gotten a shot in the arm, culminating in a return to profit for the first time since 2006, the report noted.

Nokia on the ropes
Nokia, on the other hand, is in more dire straits, with neither analysts nor investors holding out much hope for an improvement in the company’s fortunes, Bloomberg stated.

Alexander Peterc, an analyst with Exane BNP Paribas, said that he expected downgrades between 15 and 20 percent per share for first-quarter earnings and between 5 and 15 percent for full-year earnings, “depending on how negative people get”.

Fellow analyst Andy Perkins of Societe Generale Corporate & Investment Bank said that Nokia itself is predicting a difficult first quarter that is “certainly much tougher than the markets were hoping for”.

Analysts are expecting Nokia to ditch Symbian for either Google’s Android or Microsoft’s Windows Phone 7 OSes, said a New York Times report.

However, Nokia announced last December that Symbian will continue to be its main business-phone platform, even when its new top-end OS, MeeGo, is launched.

Meanwhile, devices powered by the MeeGo OS have yet to enter the market, Bloomberg pointed out.

“If we rush to market with something that is below what our brand should stand for, then we will do long-term harm,” Elop explained in the report. The CEO added that he will lay out his strategy for the company at Nokia’s investor meeting in London on Feb. 11.

News Corp.’s iPad magazine launching Feb. 2

News Corp. has chosen Groundhog Day for its launch of The Daily, a digital publication designed for tablet devices–and it’s chosen New York, not the previously rumored San Francisco, for the Feb. 2 event.

News Corp. CEO Rupert Murdoch will be making the announcement at the event at the Solomon R. Guggenheim Museum, and Apple Vice President of Internet Services Eddy Cue will join him. This is in contrast to News Corp.’s initial plans to hold the event at the San Francisco Museum of Modern Art in late January.

A source close to the matter had informed ZDNet Asia’s sister site CNET that Apple had a significant part in the decision-making process for The Daily’s launch, and that Jobs would be joining Murdoch to make the announcement. Apple fans closely followed the rumors of a close partnership between Apple and News Corp., hoping that it might provide some insight into Apple’s strategy about how it sees the iPad as a device for digital media consumption. A Jobs appearance at the launch of The Daily would be a big deal indeed.

But on Jan. 17, a day before the company’s quarterly earnings announcement, Jobs announced that he would be stepping aside to take medical leave. While Jobs–a pancreatic cancer survivor who has already taken one medical leave from his post–will remain CEO, chief operating officer Tim Cook will temporarily take over his duties at the company.

So The Daily will launch without Jobs. Cue, a longtime Apple exec, has been instrumental in the development of the iTunes Store, App Store, and the future of applications on the iPad.

The Daily, which News Corp. hired former MTV digital executive Greg Clayman to spearhead, will be the second high-profile tablet-based publication to be launched by a billionaire mogul. In late November, British entrepreneur Richard Branson’s Virgin Group released Project Magazine, a slick monthly lifestyle publication for the iPad. No Apple executives appeared on stage at that launch, but vice president of product marketing Michael Tchao was in the audience, chatting with attendees afterward.

At the time, The Daily’s launch was rumored to be imminent–but it’s taken another three months to finally get it up and running.

A notably smaller tablet publication company, Nomad Editions, launched earlier this week. It’s run by Mark Edmiston, former president of Newsweek magazine.

This article was first published as a blog post on CNET News.

Are online polls reliable enough?

2011 is an election year in New Zealand, and this week Prime Minister John Key and Labour Leader Phil Goff set out their stalls with Obama-style “state of the nation” speeches.

The pollies will be eyeing upcoming opinion polls, but can the polls be trusted anyway, especially the ones that use online polling?

Despite some success overseas, I doubt such polls are mature enough to be trusted here yet, at least for political polling, even if they do have acclaimed merits of speed and cost.

The Fairfax-owned Sunday Star-Times has begun using Horizon Research, a company that uses online panels.

But its findings have been so far out of line with the others that the polls’ credibility is often questioned.

The polls in New Zealand, including those conducted by Australia’s Roy Morgan, have tended to show National and its coalition government way out in front, but Horizon keeps showing it in danger of losing its majority.

This has led fellow pollster David Farrar of Kiwiblog to write a post about how trustworthy or untrustworthy polls can be.

Admittedly, Farrar’s own market research company, Curia, often conducts polls for the ruling National Party, so he might be biased. But his comments seem fair, especially noting the longstanding records of rival pollsters and these rivals all producing similar results.

So while online polling, especially if you rely on volunteers, is cheaper, perhaps you only get what you pay for. Phone polling or face-to-face interviewing does seem more accurate, especially with random sampling and other weighting. Horizon says it samples and weights its panels, but one could question whether it is doing so properly.

Yet pollsters in Australia have been assessing their methodology, with even Roy Morgan testing online methods, though it prefers interviewing people face to face.

Galaxy, which also operates in Australia, seems happy with its online methods, though in Australia, it uses telephones and random sampling for its federal voting intention surveys.

YouGov is another online pollster and is used and trusted by major UK papers, as well as the Economist.

YouGov claims a good accuracy record, citing large sample sizes in its polling — numbers far higher than New Zealand’s own Horizon Research.

Maybe this is one of the many things Horizon needs to look at.

Of course in the end, there is only one poll that counts: Election Day. Only then will we truly know who is right!

This article was first published at ZDNet Australia.

Reports: Internet disruptions hit Egypt

Amid a third day of anti-government protests, Internet outages and disruptions were reported today in Egypt, according to reports.

Facebook and Twitter confirmed the reports for their sites.

“We are aware of reports of disruption to service and have seen a drop in traffic from Egypt this morning,” a Facebook spokesman said in a statement. “You may want to visit Herdict.org, a project of the Berkman Center for Internet & Society at Harvard University that offers insight into what users around the world are experiencing in terms of web accessibility.”

According to Herdict.org, there were 459 reports of inaccessible sites in Egypt and 621 reports of accessible sites.

Twitter’s Global PR account reported on the site that: “Egypt continues to block Twitter & has greatly diminished traffic. However, some users are using apps/proxies to successfully tweet.”

Meanwhile, there were numerous reports of outages around the Web.

Danny O’Brien, San Francisco-based Internet Advocacy Coordinator for the Committee to Protect Journalists, reported to the North American Network Operators’ Group (NANOG) e-mail list that the organization had lost all Internet connectivity with its contacts in Egypt and was hearing reports of loss of Internet connectivity on major broadband ISPs, SMS outage and loss of mobile service in major cities there.

“The working assumption here is that the Egyptian government has made the decision to shut down all external, and perhaps internal electronic communication as a reaction to the ongoing protests in that country,” he wrote. His post included a link to a Pastebin.com page where someone at a European-based Internet activist group has started an effort to provide alternative methods — such as shortwave and pirate radio — for protesters in Egypt to communicate with each other and the outside world.

“A major service provider for Egypt, Italy-based Seabone, reported early Friday that there was no Internet traffic going into or out of the country after 12:30 a.m. local time,” the Associated Press reported. “Associated Press reporters in Cairo were also experiencing outages.”

The Los Angeles Times reported that BlackBerry users were not able to reach the Internet on their devices.

RIM provided this statement when asked for comment: “We can confirm that RIM has not implemented any changes that would impact service in Egypt and that RIM’s BlackBerry Infrastructure has continued to be fully operational throughout the day. For questions regarding a specific network in Egypt, please contact the carrier who operates the network.”

A Twitter post by Ben Wedeman, CNN senior correspondent in Cairo, around 3 p.m. PDT says: “No internet, no SMS, what is next? Mobile phones and land lines? So much for stability.”

The Arabist blog had mixed reports, with someone in Cairo saying Internet service was down while a foreign journalist was able to get onto the Internet at the Semiramis Intercontinental hotel.

Twitter representatives did not respond immediately to an e-mail request for more information.

The Internet disruptions spurred activist action. Anonymous, the group that launched distributed denial-of-service attacks last year on Web sites of financial institutions and others opposing WikiLeaks, released a video online in which it threatened denial-of-service attacks on Egyptian government Web sites if the authorities did not curtail censorship efforts. Earlier today, five people were arrested in the U.K. in connection with those attacks.

Because Twitter has been found to be an effective communications tool during social unrest and protests–in Iran and Moldova, along with Tunisia and Egypt, more recently–it is an attractive target for governments to try to block, along with Facebook.

This article was first published as a blog post on CNET News.

Asia to lead mobile-only Web population

As the mobile broadband market continues its rapid growth, the population of users that use only their mobile devices to access the Internet will hit 1 billion by 2015, with Asia-Pacific dominating this segment of the market.

According to an Ovum study released Thursday, by 2015, some 28 percent of all mobile broadband users worldwide will use this form of connectivity as their only mode of Internet access.

Additionally, more than half of this population will be based in the Asia-Pacific region, which will account for 518.4 million mobile broadband users in 2015, up from 119.1 million in 2011. The region’s market dominance is primarily due to the lack of fixed-line infrastructure in populous markets such as China and India, Ovum explained.

“Asia-Pacific’s role is extremely important in the fixed-mobile services (FMS) space,” Nicole McCormick, senior analyst at Ovum, said in the report. “The region has the third-highest penetration rate, at 34 percent, as well as the fastest-growing mobile-only [broadband] penetration of any region.”

Fixed broadband to grow, too
Despite the growing mobile broadband adoption, the takeup rate for fixed broadband will still see growth, Ovum pointed out. This is because broadband fixed-mobile convergence (FMC) services, which encompass users who buy both fixed and mobile broadband services, are expected to spike by 120 percent globally in the five years to 2015.

The report added that FMC users from the Asia-Pacific region will grow from 259 million in 2011 to 465 million by 2015.

McCormick noted that in absolute terms, the region dominates the global FMC market due to the presence of China, South Korea and Japan–all of which have significant fiber-optic deployments and are large broadband markets.

“Bundling opportunities in Asia-Pacific are expected to gather pace over the forecast period as some operators continue to seek ways to protect their fixed-line revenue bases,” she said.

On a macro level, the International Telecommunication Union (ITU) reported on Wednesday that the global Internet population will hit the 2-billion mark this year.

Broadband was cited as the growth catalyst, with ITU Secretary-General Hamadoun Toure noting that the technology “generates jobs, drives growth and productivity, and underpins long-term economic competitiveness”.

US senator proposes mobile-privacy legislation

U.S. federal law needs to be updated to halt the common police practice of tracking the whereabouts of Americans’ mobile devices without a search warrant, a Democratic senator said Wednesday.

Ron Wyden, an Oregon Democrat, said it was time for Congress to put an end to this privacy-intrusive practice, which the U.S. Justice Department under the Barack Obama administration has sought to defend in court.

In a luncheon speech at the libertarian Cato Institute in Washington, D.C., Wyden said his staff was drafting legislation to restore “the balance necessary to protect individual rights” by requiring police to obtain a search warrant signed by a judge before obtaining location information.

Even though police are tapping into the locations of mobile phones thousands of times a year, the legal ground rules remain hazy, and courts have been divided on the constitutionality and legality of the controversial practice. In September, the first federal appeals court to rule on the legality indicated that no search warrant was needed, but sent the case back to a district judge for further proceedings.

Because the two-way radios in mobile phones are constantly in contact with cellular towers, service providers like AT&T and Verizon know–and can provide to police if required–at least the rough location of each device that connects to their mobile wireless network. If the phone is talking to multiple towers, triangulation yields a rough location fix. And, of course, the location of GPS-enabled phones can be determined with near-pinpoint accuracy.
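For a rough sense of how multi-tower triangulation narrows down a location: given estimated distances to three towers at known positions, the handset’s position is the point that best agrees with all three ranges. The sketch below is a generic least-squares illustration with invented coordinates; it is not any carrier’s actual method.

```python
# Generic trilateration sketch (not any carrier's actual method): estimate a
# handset's position from noisy distance estimates to towers at known points.
# Coordinates are kilometres on a flat local grid; all values are invented.
from itertools import product
from math import hypot

towers = [(0.0, 0.0), (5.0, 0.0), (2.0, 6.0)]   # known tower positions
ranges = [3.6, 3.6, 3.2]                         # estimated handset-to-tower distances

def mismatch(x: float, y: float) -> float:
    """Sum of squared disagreements between a candidate position and the ranges."""
    return sum((hypot(x - tx, y - ty) - d) ** 2
               for (tx, ty), d in zip(towers, ranges))

# Coarse, dependency-free grid search over a 10 km x 10 km area.
grid = [i / 10 for i in range(101)]
best = min(product(grid, grid), key=lambda p: mismatch(*p))
print(f"Best-fit position: roughly {best}")
```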

Wyden said this kind of eerily accurate remote surveillance is akin to searching a person’s home, which requires probable cause and a search warrant signed by a judge. “You just can’t argue logically to me…that secretly tracking a person’s movements 24/7 is not a significant intrusion into their privacy,” he said.

The forthcoming legislation, he said, is being drafted with Rep. Jason Chaffetz (R-Utah), and will apply to “all acquisitions of geolocation information,” including GPS tracking devices that police are generally allowed to place on cars without warrants under current law.

It will address both law enforcement and intelligence investigations, including saying that Americans who are overseas continue to enjoy the same location-privacy rights, a nod to the debate a few years ago over rewriting federal wiretapping law. It will also extend the same privacy protections to both “real-time monitoring and acquisition of past movements.”

Not long ago, the concept of tracking cell phones would have been the stuff of spy movies. In 1998’s “Enemy of the State,” Gene Hackman warned that the National Security Agency has “been in bed with the entire telecommunications industry since the ’40s–they’ve infected everything”. After a decade of appearances in “24” and “Live Free or Die Hard”, location-tracking has become such a trope that it was satirized in a scene with Seth Rogen from “Pineapple Express” (2008).

In 2005, CNET disclosed that police were engaging in warrantless tracking of cell phones. In a subsequent Arizona case, agents from the Drug Enforcement Administration tracked a tractor trailer with a drug shipment through a GPS-equipped Nextel phone owned by the suspect. Texas DEA agents have used cell site information in real time to locate a Chrysler 300M driving from Rio Grande City to a ranch about 50 miles away. Verizon Wireless and T-Mobile logs showing the location of mobile phones at the time of calls became evidence in a Los Angeles murder trial.

Verizon Wireless, for instance, keeps phone records including cell site location for 12 months, a company official said at a federal task force meeting in Washington, D.C., last year. Phone bills without cell site location are kept for seven years, and SMS text messages are stored for only a very brief time. (A representative of the International Association of Chiefs of Police said yesterday that Verizon keeps incoming SMS messages for “only three to five days”.)

Wyden’s push to advance Fourth Amendment-like privacy protections through legislation is likely to be met with applause among technology firms. Last March, as CNET was the first to report, a group called the Digital Due Process coalition including Facebook, Google, Microsoft, Loopt, and AT&T as members endorsed the principle of location privacy. (Loopt says it already requires a search warrant before divulging location information.)

One of the coalition’s principles says: “A governmental entity may access, or may require a covered entity to provide, prospectively or retrospectively, location information regarding a mobile communications device only with a warrant issued based on a showing of probable cause.”

The Obama Justice Department, on the other hand, has argued that warrantless tracking is permitted because Americans enjoy no “reasonable expectation of privacy” in their–or at least their cell phones’–whereabouts. U.S. Department of Justice lawyers have argued in court documents that “a customer’s Fourth Amendment rights are not violated when the phone company reveals to the government its own records” that show where a mobile device placed and received calls.

Windows Phone 7 sales top 2 million

Microsoft says it has sold more than 2 million Windows Phone 7 devices since launch. That number represents handsets sold to mobile operators and retailers and not necessarily consumers.

The initial report of Windows Phone 7 sales came from Microsoft in late December and topped 1.5 million units. Back then, Achim Berg, vice president of business and marketing for Windows Phone, said that number was “in line” with company expectations.

In a phone call with ZDNet Asia’s sister site CNET, Greg Sullivan, senior product manager for Windows Phone 7, said while sales were certainly a measure of the platform’s success, customer satisfaction and developer investment were more important leading indicators. And to that end, the company has been pleased.

“93 percent of Windows Phone customers are satisfied or very satisfied with Windows Phone 7, and 90 percent would recommend the phone to others,” Sullivan said. Those numbers were based on a recent survey of Windows Phone 7 customers numbering in the hundreds.

At the Consumer Electronics Show earlier this month, Microsoft CEO Steve Ballmer had articulated that people “fell in love” with Windows Phone 7 once they saw the device, and that getting it into the hands of consumers would be “job number one”. To that end, Sullivan said Microsoft is planning more marketing outreach.

“We’re absolutely doing things to turn people onto this great thing, that those who have experienced it, love,” he said. “You will see us continue to do some very visible things in terms of getting that word out, that–boy–once people use this phone, they fall in love with it very quickly.”

As for why Microsoft doesn’t have a more precise count of how many handsets have actually been sold to users, Sullivan noted that mobile operators are not contractually obligated to provide Microsoft with activation numbers or sell-through data. “We have a high degree of confidence in the precision of the sell-in numbers, which is why that’s what we’re providing,” he explained.

Sullivan said there are now more than 6,500 apps in Microsoft’s Windows Phone Marketplace, and the company currently has more than 24,000 registered developers. That’s up from the 5,500 apps and 20,000 developers announced at CES earlier this month.

Microsoft plans to release the first of two announced software updates for Windows Phone 7 devices within what Sullivan said would be “the next few months.” This first update will bring copy-and-paste functionality, along with faster application loading and some bug fixes. The second update, planned for release in “the first half” of this year, will add support for CDMA networks such as Sprint’s and Verizon’s, where Windows phones are currently unavailable.

Study: iOS, iPad gain enterprise computing share

Apple has said many times that the iPhone and iPad are gaining popularity with enterprise-level businesses. We’ve heard most recently that the iPad is either being used or tested for use at “more than 80 percent” of Fortune 100 companies, according to Apple COO Tim Cook. Today, a company that makes enterprise software is providing additional evidence that corporate customers are warming to the iPad, with details on which industries are embracing it already.

Good Technology makes enterprise software for mobile devices (Good For Enterprise) and has spent the last year tracking which devices its clients install its software on. Using data gleaned from more than 2,000 clients, Good found that during the fourth quarter of 2010, more than 65 percent of all activations using its software were on iOS devices, meaning iPhones and iPads. iPad activations grew from 14 percent to 22 percent of all new devices over the same period.

The most activated devices Good saw during the quarter were, in order, iPhone 4, iPad, iPhone 3GS, Motorola Droid X, and Motorola Droid 2. Overall, Android phones remained about a third of new devices activated during the quarter, roughly the same as the previous three months, according to the study. For the first time, there were no Windows Mobile or Symbian devices in the top 10 most activated new devices, Good found.

It should be noted that Windows Phone 7 is not included because Good doesn’t yet support that platform. BlackBerry devices are also absent: all BlackBerry software runs through the BlackBerry Enterprise Server, so Good has no access to activation data for RIM’s smartphones.

We also get some detail on where the iPad is being used. Good found that the industry in which its customers use the iPad most is financial services, followed by health care, legal/professional services, high tech, government/public sector, and wholesale/retail.

Apple obviously has a head start in tablets, since the iPad has been available since April 2010, but in the coming year it should have some competition. Several Android tablets are expected to be released this year, as well as WebOS tablets from Hewlett-Packard, a heavyweight when it comes to enterprise customers. But the biggest challenge for tablet adoption in the enterprise is likely to come from RIM, which, as previously mentioned, isn’t included in Good’s numbers. The PlayBook is expected to go on sale this year as a companion device to the BlackBerry, which has long been entrenched in the corporate world.

S’pore may auction 4G spectrum in 2012

The Singapore government intends to auction off 4G wireless spectrum rights as early as next year, paving the way for a faster rollout of Long Term Evolution (LTE) in the country.

According to local reports, ICT regulator the Infocomm Development Authority of Singapore (IDA) announced Monday that it would make six lots of spectrum available for service providers to implement high-speed mobile data services. 4G is said to offer speeds five to 10 times faster than existing 3G technology.

Currently, SingTel, StarHub, M1, QMax and PacketOne hold the rights to use the 2.3/2.5 GHz spectrum, which the service providers successfully bid for in 2005. These rights will expire in 2015, after which the spectrum will be dedicated exclusively to the deployment of 4G services, said the IDA.

In the meantime, operators can seek approval from the government to deploy LTE with their existing spectrum rights in the 900/1800 MHz and 2.3/2.5 GHz bands.

Operators quoted in the reports did not specify when LTE services will be made commercially available. SingTel, StarHub and M1 have conducted or have ongoing trials of the technology.