Archive for the ‘Latest Techguide’ Category

Latest Technologies guide news

Posted: December 13, 2010 in Latest Techguide

5 tips for learning how to use Server Core

As organizations work to increase the density of the virtual servers running on their host servers, many are turning to Server Core deployments.

Server Core lacks a lot of the GUI features found in more traditional Windows Server deployments. It’s a lightweight server operating system, which makes it ideal for use in virtual data centers. Even so, there’s no denying that Server Core can be a bit intimidating and that a learning curve is associated with managing Server Core operating systems.

In this article, I will provide five tips for learning how to use Server Core.

1: Set up a lab machine
Without a doubt, the best advice I can give you is to set up a few lab machines and install Server Core. That way, you can experiment with configuring and managing the operating system without having to worry about harming your production systems.

As you do, don’t be afraid to get your hands dirty. The deeper you dig into Server Core on your lab machines, the better equipped you will be to manage Server Core deployments in the real world.

2: Understand the difference between the command line and PowerShell
I have read several blog posts that have incorrectly reported that administrators must use PowerShell cmdlets to manage Server Core operating systems. Although Server Core is managed from the command line, there is a difference between the command line and PowerShell.

The command line traces its roots back to DOS and has existed in one form or another in every version of Windows ever released for the x86/x64 platform. Although some command-line commands will work in PowerShell, PowerShell commands will not work in a command-line environment.
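As a quick illustration of the difference (my own example, not from the original article): an external executable such as ipconfig runs the same way from either shell, while a PowerShell cmdlet such as Get-Service is recognized only by PowerShell:

C:\> ipconfig        (works from both cmd.exe and PowerShell)
C:\> Get-Service     (PowerShell cmdlet; cmd.exe reports it as an unrecognized command)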

The command line is the primary interface for managing Server Core. In fact, PowerShell isn’t even natively supported on Windows Server 2008 Server Core servers (although there is an unofficial workaround that can be used to add PowerShell support). PowerShell is natively available on Server Core servers that are running Windows Server 2008 R2, but it’s not installed by default. Microsoft Support provides instructions for enabling PowerShell.
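For reference, a minimal sketch of enabling PowerShell on a Windows Server 2008 R2 Server Core machine looks something like the following (the DISM feature names here are taken from Microsoft's documentation and are case-sensitive, so verify them against your build before relying on them):

dism /online /enable-feature /featurename:NetFx2-ServerCore
dism /online /enable-feature /featurename:MicrosoftWindowsPowerShell

Once the features are enabled, PowerShell can be started from C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe.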

3: Check out the available graphical utilities
Even though the whole point of Server Core is that it’s supposed to be a lightweight server OS without a GUI, it actually does have a GUI. Several graphical utilities can help you with the initial server configuration process.

The best of these utilities (in my opinion) is Core Configurator 2.0, an open source utility that’s available as a free download. It’s designed to help you to do things such as naming your server, configuring its network settings, and licensing the server.

In addition, Microsoft includes a configuration utility called Sconfig with Windows Server 2008 R2. Simply enter SCONFIG.CMD at the command prompt, and Windows will launch the Server Configuration utility. This utility is similar to the Core Configurator, but its options aren’t quite as extensive. The Server Configuration utility will help you to do things like joining a domain or installing updates.

4: Don’t forget about graphical management tools
When you manage a normal Windows 2008 server, you use built-in management utilities, such as the Active Directory Users And Computers Console and the Service Control Manager. Although such utilities connect to the local server by default, they’re designed to let you manage other servers on your network, including servers that are running Server Core.

Even though Server Core operating systems don’t come with a comprehensive suite of management utilities, there is absolutely nothing stopping you from connecting to a core server from another server’s management consoles and managing that core server in exactly the same way that you would if it were running a graphical Windows Server operating system.
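One practical wrinkle worth noting (an assumption on my part, not something the author covers): the core server's firewall may block remote MMC consoles until you enable the relevant rule group on the core server, for example:

netsh advfirewall firewall set rule group="Remote Administration" new enable=yes

The rule group name is locale-specific, so check it on a non-English installation.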

5: Learn Server Core’s limitations
Because Server Core is a lightweight server operating system, it’s not suitable for all purposes. Plenty of third-party applications simply will not run on a Server Core deployment.

In addition, many of the roles and role services that are often run on traditional Windows Server 2008 R2 servers are not supported on Server Core deployments. The actual roles that are supported by Server Core vary depending on the edition of Windows you are installing.

For instance, Windows Server 2008 R2 Web Edition supports only three roles, while the Datacenter and Enterprise Editions support 11 roles:

  • Active Directory Certificate Services
  • Active Directory Domain Services
  • Active Directory Lightweight Directory Services
  • BranchCache Hosted Cache
  • DHCP Server
  • DNS Server
  • File Services
  • Hyper-V
  • Media Services (this role must be downloaded separately)
  • Print Services
  • Web Services (IIS)

Microsoft provides a full list of the roles that are supported by the various editions of Windows Server 2008 R2.

Brien Posey is a seven-time Microsoft MVP. He has written thousands of articles and written or contributed to dozens of books on a variety of IT subjects.

Obtaining network information with netstat

One of the best utilities on Linux for network troubleshooting is a very simple one: netstat.

Netstat can provide a lot of information, such as network connections, routing tables, interface statistics, and more. It displays information on various address families, such as TCP, UDP, and UNIX domain sockets.

Of course, all of this can also make it a daunting tool to use if you have never used it before.

While netstat is useful as a regular user, to get the most out of it you will need to run it as root. For instance, to determine what program is listening on a port or socket (the -p switch), you need root privileges.

To see all of the TCP ports being listened to on the system, and by what program, use:

# netstat -l --tcp -p
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address    State       PID/Program name
tcp        0      0 *:ssh                       *:*                LISTEN      1666/sshd
tcp        0      0 localhost.localdomain:smtp  *:*                LISTEN      1841/sendmail: acce
tcp        0      0 *:mysql                     *:*                LISTEN      1807/mysqld
tcp        0      0 *:http                      *:*                LISTEN      1873/httpd
tcp        0      0 *:https                     *:*                LISTEN      1873/httpd

From the above, you can see that sshd is listening to port 22 (netstat will display the port name from /etc/services unless you use the “-n” switch), on all interfaces. Sendmail is listening to port 25 on only the loopback interface (127.0.0.1), and Apache is listening to ports 80 and 443, while MySQL is listening to port 3306 on all available network interfaces. This gives you an idea of what services are running, and what ports they are listening to; this is one way to determine if something is running that shouldn’t be, or isn’t running when it should be.

The same can be done for UDP, again, to make sure that nothing is listening for active connections that shouldn’t be:

# netstat -l --udp -p -n
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address    State       PID/Program name
udp        0      0 0.0.0.0:68                  0.0.0.0:*                      1292/dhclient
udp        0      0 192.168.250.52:123          0.0.0.0:*                      1679/ntpd
udp        0      0 127.0.0.1:123               0.0.0.0:*                      1679/ntpd
udp        0      0 0.0.0.0:123                 0.0.0.0:*                      1679/ntpd
udp        0      0 0.0.0.0:42022               0.0.0.0:*                      1292/dhclient
udp        0      0 ::1:123                     :::*                           1679/ntpd
udp        0      0 fe80::226:18ff:fe7b:123     :::*                           1679/ntpd
udp        0      0 :::123                      :::*                           1679/ntpd
udp        0      0 :::15884                    :::*                           1292/dhclient

As you can see from the above, netstat will display anything listening to IPv4 or IPv6 addresses.

Netstat isn’t restricted to telling you what is listening to ports; it can also tell you active connections, like this:

# netstat --tcp -p
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address             Foreign Address        State  PID/Program name
tcp   0      0 wrk.myhost.com:53231    wrk2.myhost.com:ssh         ESTABLISHED 3333/ssh
tcp   0      0 wrk.myhost.com:44401    iy-in-f113.1e100.net:http   TIME_WAIT   -
tcp   1      0 wrk.myhost.com:51848    204.203.18.161:http         CLOSE_WAIT  2729/clock-applet
tcp   0      0 wrk.myhost.com:821      srv.myhost.com:nfs          ESTABLISHED -
tcp   0      0 wrk.myhost.com:59028    iy-in-f101.1e100.net:http   TIME_WAIT   -
tcp   0      0 wrk.myhost.com:37120    dns.myhost.com:ldap         ESTABLISHED 1658/sssd_be
tcp   0      0 wrk.myhost.com:ssh      laptop.myhost.com:52286     ESTABLISHED 3274/sshd: joe [

From the above, you can see that the first connection is an outbound SSH connection (originating from port 53231, destined for port 22). You can also see some outbound HTTP connections from the GNOME clock-applet, as well as outbound authentication requests from SSSD, and outbound NFS. The last entry shows an inbound SSH connection.

The -i switch provides a list of network interfaces and counters for the packets transmitted and received on each:

# netstat -i
Kernel Interface table
Iface       MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0       1500   0    60755      0      0      0    40332      0      0      0 BMRU
lo        16436   0      149      0      0      0      149      0      0      0 LRU

An interesting “watchdog” use of netstat is with the -c switch, which will print a continuous listing of whatever you have asked it to display, refreshing every second. This is a good way to observe changes that are happening (connections being opened, etc.).

Finally, you can use netstat in place of other commands: netstat -r shows a kernel routing table, similar to route -n and netstat -ie shows interface information identical to ifconfig.
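For example (output formats follow the net-tools version of netstat found on most Linux distributions):

# netstat -rn     (numeric kernel routing table, comparable to route -n)
# netstat -ie     (extended interface details, comparable to ifconfig)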

Netstat can provide a lot of information that can be very useful in tracking down various network-related problems, or just to keep an eye on the system, making sure that no unauthorized programs are listening for incoming network connections.

Keep in mind that netstat tells you what is actively listening or connected; it cannot tell you if a firewall is blocking that port. So while a service might be noted as listening, it may not actually be accessible. Netstat doesn’t provide the entire picture, but it can certainly help provide useful clues.

Vincent Danen works on the Red Hat Security Response Team and lives in Canada. He has been writing about and developing on Linux for over 10 years.

Social media a double-edged sword for SMBs

With small and midsize businesses (SMBs) accounting for over 90 percent of the Philippine economy, technology–in particular, social media–is seen as a valuable tool to democratize the playing field and make it easier for local companies to compete with the industrial giants.

Adopting social networks is ideal for local SMBs since the Philippines now boasts the highest usage of online social activities in the Asia-Pacific region, according to online analyst comScore.

“Social media is a powerful tool and with great power comes great responsibility.” 

— Joey Alarilla
Yahoo Southeast Asia

Although it is second to Indonesia in the region in terms of Facebook user base, the Philippines has the highest penetration rate of social media users, with 90.3 percent of the country’s Web population owning a Facebook account.

But, experts warned that social media could become a double-edged sword if deployed by an overzealous company that does not have a proper strategy in place.

“Social media is not a silver bullet. It won’t magically transform your company,” said Manila-based Joey Alarilla, head of social content strategy for Yahoo Southeast Asia. “If your product or service isn’t good and there are no efforts to improve it, social media will only highlight your inadequacies and annoy your customers.”

In an e-mail interview, Alarilla explained that entrepreneurs must stick to their business objectives and remember the purpose of establishing conversations with their online audience.

“Business owners should avoid saying anything they may regret online,” he said. “Social media is a powerful tool and with great power comes great responsibility. We’ve seen many cautionary tales of businesses that have suffered public embarrassment and backlash against their brands when conversations become too heated.”

Plunging headlong into the social Web without preparation, according to Filipino social media guru Sonnie Santos, could also result in the miscommunication of the company’s message to its target market.

The tell-tale signs of an ill-conceived social media strategy include having unclear or no rules on online engagement, untrained employees, and the lack of a point-person to support a social Web campaign, Santos said.

However, he warned that SMBs would also be missing out on the market of young professionals if they ignore social media as a communication tool. “[But] if used ignorantly, resources are wasted, productivity is lost, and online reputation can be damaged by employees who use the tool without proper guidance,” he added.

Despite the potential pitfalls, embracing the Web as well as a social media policy will likely prove to be the cheapest and most effective way for SMBs to expand their footprint.

Timothy Birdsall, director of Lotus software at IBM Asia-Pacific, which recently launched a social media-enhanced messaging suite in the Philippines, said resource-strapped SMBs will only need to invest in the initial setup to roll out a social media strategy.

Birdsall added: “All they need to do is to create a profile. Put that out, together with their capabilities, and people will find them online. The savings will be infinite.”

Santos, however, recommended that SMBs also hire a consultant to craft their online philosophy and social Web policy, as well as set up the site’s integration with other social media accounts.

Blending social with existing channels
Alarilla noted that social media should also be “part of a 360-degree marketing campaign” so that it complements the company’s online display advertising, search marketing, events and print advertising.

“For SMBs, social media is great at bringing you to where your customers are, and giving your company a human face as you engage them on social networks,” he said.

As social media is no panacea, it should only be deployed by SMBs in areas where it can be used as an effective and measurable digital marketing tool.

Santos highlighted relevant departments within an organization that should use the social Web: marketing; customer service and relations; human resources to support recruitment, training, corporate communications and employee engagement; and operations, which is applicable only in certain industries.

He added that the level of engagement would depend on the nature of business and target market. “B2Cs (business-to-consumers) should employ a deeper level of engagement, while B2Bs (business-to-business) should use social Web primarily to manage their online reputation,” he said.

Alarilla suggested that rather than formulate their social media strategies from scratch, SMBs could explore social media platforms that have been built specifically for their needs.

For instance, he explained that Yahoo currently has a location-based social networking site in Indonesia called “Koprol for Business”. The service has been designed for SMBs to create self-managed business listings and targets users who are in the vicinity of their business to improve their chances of engaging with customers, he said.

He added that social media can only be effective if every stakeholder within the enterprise embraces it.

“It should transform your company’s internal processes and break down silos,” he said. “Ideally, every employee should become a social media evangelist for the company, just as your company’s goal is to turn your users into your brand advocates.”

“Prior to plunging into social media, companies should manage expectations and make stakeholders realize that social media is not a sprint, but a marathon that should be part of a long-term business strategy and overall communication plan,” he concluded.

Melvin G. Calimag is a freelance IT writer based in the Philippines.

Avoid getting buried in technical debt

In an experience report on the benefits of object-orientation for OOPSLA ’92, Ward Cunningham observed:

Another, more serious pitfall is the failure to consolidate. Although immature code may work fine and be completely acceptable to the customer, excess quantities will make a program unmasterable, leading to extreme specialization of programmers and finally an inflexible product. Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite. Objects make the cost of this transaction tolerable. The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.

Cunningham’s debt metaphor has since become popularized as “technical debt” or “design debt”. The analogy applies well on several levels.

Debt service. Steve McConnell points out that where there’s debt, there’s interest. Not only do you eventually have to implement a correct replacement (paying back the principal), but in the meantime, you must work around what you implemented incorrectly (that’s the interest). Doing it wrong takes up more time in the long run, although it might be faster in the short term.

Deficit coding. Some organizations treat technical debt similarly to how some governments and individuals treat fiscal debt: they ignore it and just keep borrowing and spending. In terms of technical debt, that means ignoring the mistakes of the past and continuing to patch new solutions over old problems. Eventually, though, these patches take longer and longer to implement successfully. Sometimes, “success” gets redefined in terms of the minimum required to get by. More significantly, the entire system becomes more brittle. Nobody fully comprehends all of its dependencies, and even those who come close can’t make major changes without breaking things. Users begin to wonder why problems take so long to get fixed (if they ever do) and why new problems arise with every release.

Write-offs. You always have the alternative of declaring technical bankruptcy. Throw out the project and start over. As in the financial world, though, the consequences of that decision aren’t trivial. You can lose a lot of credit with users and supporters during the interim when you don’t have a product. Furthermore, a redesign from the ground up is a lot more work than most people realize, and you have to make sure it’s done right. The worst possible scenario would be to spend millions of dollars, years of effort, and end up with only a newer, shinier pile of technical debt. The very fact that you’re considering that kind of drastic measure indicates strongly against your success: the bad habits that got you here have probably left thousands of critical system requirements completely undocumented. Good luck discovering those before you ship something to customers.

It’s not all bad. Strategic debt can leverage finances, and the same holds true in the technical world. Sometimes you need to get to market more quickly than you can do it the right way. So, you make a strategic decision to hack part of the system together, with a plan to go back later and redesign that portion. The key here is that you know that you’re incurring a debt, and it’s all part of a plan that won’t allow that debt to get out of control. It’s intentional, not accidental.

That’s the main benefit of using the technical debt metaphor: awareness. Too often, after a particularly bloody operation on a piece of unmaintainable code, a developer will approach his or her manager with “We really need to rewrite this module”, only to be brushed off with “Why? It’s working now, isn’t it?” Even if the developer possesses the debating skills necessary to point out that all subsequent changes to this code would benefit from taking some time now to refactor it, the manager would rather take chances on the future, because “we’ve got enough on our plate already”.

By framing the problem in terms of the debt metaphor, its unsustainability becomes clear. Most professionals can look at a balance sheet with growing liabilities and tell you that “somethin’s gotta change”. It isn’t always so apparent when you’re digging a similar hole technically.

Chip Camden has been programming since 1978, and he’s still not done. An independent consultant since 1991, Chip specializes in software development tools, languages, and migration to new technology.

Disable UAC for Windows Servers through Group Policy

User Account Control (UAC) is a mechanism in Windows Server 2008, Windows Server 2008 R2, Windows 7, and Windows Vista that provides interactive notification of administrative tasks that may be called by various programs. Microsoft and non-Microsoft applications that are installed on a server will be subject to UAC. The most visible indicator that UAC is in use for a file is the shield ribbon identifier that is put on a shortcut (Figure A).

Figure A

Windows Server 2008 and Windows 7’s UAC features are good, but I don’t feel they are necessary on server platforms for a general-purpose system. The solution is to implement three values in a Group Policy Object (GPO) that will configure the computer account to not run UAC. These values are located in Computer Configuration | Policies | Windows Settings | Security Settings | Local Policies | Security Options with the following values:

  • User Account Control: Behavior of the elevation prompt for administrators in Admin Approval Mode
  • User Account Control: Detect application installations and prompt for elevation
  • User Account Control: Turn on Admin Approval Mode

These values are set to Elevate Without Prompting, Disabled, and Enabled respectively to turn off UAC for computer accounts. This GPO is shown in Figure B with the values set to the configuration elements.

Figure B


In the example, the GPO is named Filter-GPO-ServerOS to apply a filter by security group of computer accounts. (Read my TechRepublic tip on how to configure a GPO to be applied only to members of a security group.) A good practice would be to apply the GPOs to a security group that contains server computer accounts, and possibly one for select workstation accounts. This value requires a reboot to take effect via Group Policy. Also, the UAC shield icon doesn’t go away, but subsequent access to the application doesn’t prompt for UAC anymore.
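For what it’s worth, my understanding is that these three policies correspond to registry values under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System, so the same configuration can be scripted outside of Group Policy if needed (double-check this mapping before using it in production):

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v ConsentPromptBehaviorAdmin /t REG_DWORD /d 0 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableInstallerDetection /t REG_DWORD /d 0 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLUA /t REG_DWORD /d 1 /f

Here ConsentPromptBehaviorAdmin=0 corresponds to Elevate Without Prompting, EnableInstallerDetection=0 to Disabled, and EnableLUA=1 to Admin Approval Mode remaining Enabled, matching the GPO values above; a reboot is still required.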

I know some server admins are fans of UAC, while others prefer to disable the feature. Do you disable UAC? Share your perspective on this feature.

Tips and tricks to help you do more with OpenSSH

Previously, we looked at the basics of key management in OpenSSH, which, in my opinion, really need to be understood before you start to play with all the other fine trickery OpenSSH offers. Key management is important, and easy, and now that we all understand how to manage keys, we can get on with the fun stuff.

Because I take OpenSSH for granted, I don’t really think about what I do with it. So here are some pointers and tips to various SSH-related commands that can make life easier, more secure, and hopefully better. This really is just the tip of the iceberg; there is so much more that OpenSSH can do, but I hope this at least gives you some new tricks and inspires some further investigation.

Running remote X applications
If you want to run a remote X11 program locally, you can do that via OpenSSH, taking advantage of its encryption benefits. With X running, open a terminal and type:

$ ssh -fX user@host firefox

This will fire up Firefox on the remote computer and display the output over an encrypted SSH connection on the local display. You will need X11Forwarding yes enabled on the remote server (usually it is; if not, check /etc/ssh/sshd_config or /etc/sshd_config).
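If you do need to turn it on, the relevant sshd_config line looks like this (restart sshd afterward):

X11Forwarding yes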

Easy connections to remote using screen
When you first log into a system and run screen, you have multiple terminals open that can be switched around. If you need to disconnect from the system, have a network outage, or switch from one wireless network to another, running the remote session under screen will prevent whatever processes are currently running from terminating prematurely. However, when you do run screen like this, typically you would log in directly and then start, or resume, screen.

Instead, you can do this with one command, which has the advantage of logging you out immediately when disconnecting from the screen:

$ ssh -t user@host screen -r

This also has the benefit of not starting an extra shell process just to launch screen. This will not work, however, if screen is not already running on the remote host.

Also note that you can run almost any command remotely like this. The -t switch forces pseudo-tty allocation, so you can use it to run simple commands, interactive programs such as a MySQL client login, or alternatives to screen like tmux.
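For example (the hostnames here are placeholders):

$ ssh -t user@host tmux attach
$ ssh -t user@dbhost mysql -u dbuser -p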

Encrypted tunnels to remote hosts
This is one of the best uses of OpenSSH. With tunneling, you can tell OpenSSH to create a tunnel to a port on the remote server, and connect to it locally. For instance, if you run a private web server where port 80 is firewalled off from the internet, you can use the following to connect to it:

$ ssh -N -L8080:127.0.0.1:80 user@remotehost

Then point your browser to http://127.0.0.1:8080 and it will connect to port 80 on remotehost, through the SSH tunnel. Keep in mind that, for web connections at least, it will only connect to an IP, so name-based virtual hosting is out, or at least reaching a name-based virtual host would be.

On the other hand, if you have a MySQL service or some other firewalled service, you can use the same technique to get to that service as well. If you wanted to connect to MySQL on remotehost you might use:

$ ssh -N -L3306:127.0.0.1:3306 remotehost

Then point your MySQL client application to localhost (127.0.0.1) and port 3306. The general syntax of the -L option is “local_port:destination_host:destination_port”, where the destination host is resolved by the remote SSH server (so 127.0.0.1 in the examples above refers to the remote host’s own loopback interface).
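Because that middle address is resolved on the remote side, it does not have to be the loopback interface. A hypothetical example: if db.internal is only reachable from gateway.example.com, this forwards local port 5432 to it through the gateway:

$ ssh -N -L 5432:db.internal:5432 user@gateway.example.com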

Creating a SOCKS5 proxy
One really neat thing OpenSSH can do is create a SOCKS5 proxy, which is a direct connection proxy. This allows you to tunnel all HTTP requests, or any other kind of traffic that can be sent through a SOCKS5 proxy, via SSH through a server you can access. This might be useful at a coffee shop, for instance, where you want to direct all HTTP traffic through your SSH proxy to your system at home or the office, in order to avoid potential snooping or data theft (looking directly at you, FireSheep).

The command I use to create the SOCKS5 proxy using OpenSSH is:

$ ssh -C2qTnNM -D 8080 user@remotehost

This creates a compressed connection with pseudo-tty allocation disabled, and places the ssh client into master mode for connection sharing (see man ssh for more details on the other options). The proxy will live on port 8080 of the local host. A quick test is to use something like curl with whatismyip.com:

$ curl --socks5 127.0.0.1:8080 www.whatismyip.com/automation/n09230945.asp

Call curl with that command, then compare it to using curl on that URL directly and you should see two different IP addresses–the first being the remote server’s IP, and the second being your own.

Since curl is really only useful for testing, check out FoxyProxy for Firefox in order to make Firefox use the proxy.

These are just a few things that OpenSSH can do, but I think they’re very useful. OpenSSH truly is a ubiquitous Swiss-Army knife utility; it is pre-installed and available on pretty much every major operating system with the exception of Windows. It may be intimidating if you’re just figuring it out for the first time, but spend some time playing with it and that investment will definitely pay off.

10 things to look for in a data center

Everyone’s going to the cloud. The cloud’s all the rage. Almost no IT discussion is complete without mentioning “the cloud”. But when it comes down to it, the cloud is nothing more than systems hosting information in a data center somewhere “out there”.

Organizations have discovered the benefits of offloading infrastructure development, automatic failover engineering, and multiple coordinated power feeds, not to mention backups, OS maintenance, and physical security, to third-party data centers. That’s why “going to the cloud” ultimately makes sense.

Unfortunately, not every data center is ready for prime time. Some have sprung up as part of a cloud-based land grab. Review these 10 factors to ensure that your organization’s data center is up to the task.

1: Data capacity
Data centers are typically engineered to support mind-boggling data transmission capacities. Some feature multiple OCx and SONET connections that can manage Amazon.com-like Web site demands. Other less sophisticated entities might try getting by using redundant T-3s. Don’t find out the hard way that your data center provider failed to adequately forecast capacity and can’t quickly scale.

2: Redundant power
Many data centers have online electrical backups. UPSes, in other words. If your organization maintains business-critical systems that simply can’t go down, be sure that the data center has a second electrical backbone connection. Only N+1 power grid connectivity, to a secondary electrical source, can help protect against catastrophe.

3: Backup Internet
Just as any quality data center will maintain redundant power sources, so too must it maintain secondary and tertiary Internet connectivity. Buried cables get cut. Overhead cables fall when trucks strike poles. Vendors experience network-wide outages. Only by making sure that multiple tier-1 Internet provider circuits feed a facility via fully meshed backbones can IT managers rest assured they’ve done what they can to eliminate potential downtime.

4: Automatic hardware failover
Redundant power, Internet, and even heating and cooling systems are great, but if they’re not configured as hot online spares, downtime can still occur. It’s critical that data centers employ redundant online switches, routers, UPSes, and HVAC equipment that automatically fail over when trouble arises.

5: Access control
The importance of physical security can’t be overstated. Commerce could be significantly affected if just one unstable individual were able to drive a large vehicle into a busy and sensitive data center. That’s why it’s important that a data center’s physical perimeter be properly protected. In addition to physical access controls (keys, scanner cards, biometric devices, etc.), care must be taken to ensure that, should someone gain access to a data center, individually leased sections remain secure (thanks to additional physical access controls, locks, cages, rooms, etc.).

6: 24x7x365 support
Data centers must be staffed and monitored by properly trained technicians and engineers at all times. It’s an unfortunate byproduct of today’s pressurized business environment but a fact nevertheless. Systems can’t fail. Constant monitoring and maintenance is a must. Certainly, many data centers will run leaner shifts during off hours, but telephone support and onsite assistance must be always available. Further, data center services must include customer reporting tools that assist clients in understanding a center’s network status.

7: Independent power
Data centers must have redundant electrical grid connections. That’s a given. And facilities must also maintain their own independent power supply. Most turn to onsite diesel generators, which need to be periodically tested to ensure that they can fulfill a data center’s electrical requirements in case of a natural disaster or episode that disrupts the site’s other electrical sources.

8: In-house break/fix service
One of the benefits of delegating services to the cloud is eliminating the need to maintain physical and virtualized servers. OS maintenance, security patching, and hardware support all become the responsibility of the data center. Even if an organization chooses to co-locate its own servers within a data center, the data center should provide in-house staff capable of maintaining software and responding to hardware crises.

9: Written SLAs
Any data center contract should come complete with a specifically worded service level agreement (SLA). The SLA should guarantee specific uptime, service response, bandwidth, and physical access protections, among other elements. Ensure, too, that the SLA or terms of service state what happens if a data center fails to provide uptime as stated, maintenance or service as scheduled, or crisis response within stated timeframes.

10: Financial stability
All the promises in the world, and even an incredibly compelling price, mean nothing if the data center fails. Before moving large amounts of data and equipment into a facility, do some homework on the company that owns the site. Confirm that it’s free and clear of lawsuits, has adequate operating capital, and isn’t in financial straits. The last thing you want to do is have to repeat the process because a center fails financially or must cut costs (and subsequently service and capacity) to stay afloat.

Erik Eckel owns and operates two technology companies. In addition to serving as a managing partner at Louisville Geek, which specializes in providing cost-effective technology solutions to small and midsize businesses, he also operates Eckel Media Corp.

Use a temporary default value to streamline data entry in Access

Microsoft Access


Use a temporary default value to streamline data entry in Access

There are a lot of opportunities for reducing data entry, but here’s one you might not have considered–entering temporary default values.

Doing so will reduce keystrokes when records share the same value such as the same zip code, the same city, the same customer, and so on, but that shared value changes from time to time.

This situation probably arises more than you realize. For instance, suppose a data entry operator enters orders processed by sales personnel who support specific ZIP codes, cities, or regions. The data entry operator knows that each order in a specific pile will have the same ZIP code, city, and so on.

Or, perhaps your data entry operator receives piles of work order forms from service managers who service only one company. In that case, every form in the pile will share the same customer value.

When a data entry operator enters several records with the same value one after the other, you can ease the data entry burden just a bit, by making that related value the default value for that field–temporarily. That way, the operator doesn’t have to re-enter the value for each new record–it’s already there!

Setting up this solution is easier than you might think–it takes just a bit of code in the control’s AfterUpdate event. Using the example form below, we’ll use this technique to create temporary defaults for three controls named txtCustomer, txtCreatedBy, and dteSubmittedDate. At the table level, the SubmittedDate field’s default value is Date(). (You can work with most any form, just be sure to update the control names accordingly.)

To add the event procedures for the three controls, open the form in Design View and then click the Code button in the Tools group to open the form’s module. Enter the following code:

Private Sub dteSubmittedDate_AfterUpdate()
  'Set current date value to default value.
  'Chr(35) wraps the date in # delimiters.
  dteSubmittedDate.DefaultValue = Chr(35) & dteSubmittedDate.Value & Chr(35)
  'Print the new default to the Immediate window for verification.
  Debug.Print dteSubmittedDate.DefaultValue
End Sub
Private Sub txtCreatedBy_AfterUpdate()
  'Set current value to default value.
  'Chr(34) wraps the text in quotation marks.
  txtCreatedBy.DefaultValue = Chr(34) & txtCreatedBy.Value & Chr(34)
End Sub
Private Sub txtCustomer_AfterUpdate()
  'Set current customer value to default value.
  txtCustomer.DefaultValue = Chr(34) & txtCustomer.Value & Chr(34)
End Sub

When you open the form in Form view, the AutoNumber field will display (New) and the Submitted Date control will display the current date. Enter a new record. Doing so will trigger the AfterUpdate events, which will use the values you enter as the default values for the corresponding controls:

  • ABC, International is now the default value for txtCustomer.
  • Susan Harkins is now the default value for txtCreatedBy.
  • 1/20/2011 is now the default value for dteSubmittedDate. (That value was originally supplied by the field’s Date() default.)

When you click the New Record button, the newly-set default values automatically fill in the controls. The only value the data entry operator has to enter is the service code.

That means that the data entry operator can bypass three controls for each new record until a value changes. For instance, when the data entry operator moves on to the stack of order forms for RabbitTracks, he or she will update the Customer, CreatedBy, and SubmittedDate values for the first record in that batch. Doing so resets the temporary default values. That’s why this is such a useful technique for batch input–as the operator works through the pile of forms, the default values update to match the new input values, automatically.

It’s important to remember that this code updates the control’s default value property at the form level. This form-level setting takes precedence over a table-level equivalent. However, it does not overwrite the table property. If you delete the form-level setting, the table-level property kicks right in.

When you close the form, it saves the temporary default value. Consequently, when you next open the form, it will use the last set of default values. If you want the form to clear these properties from session to session, add the following code to the form’s module:

Private Sub Form_Open(Cancel As Integer)
  'Set Default Value properties to nothing.
  dteSubmittedDate.DefaultValue = vbNullString
  txtCreatedBy.DefaultValue = vbNullString
  txtCustomer.DefaultValue = vbNullString
End Sub

When you open the form, the Open event will clear the three previously-set default values. That means that txtCustomer and txtCreatedBy will be blank and dteSubmittedDate will display the current date (the result of Date(), the field’s table-level Default Value setting).

This technique might not seem like much to you. But, some users spend a lot of time entering data, so anything you can do to eliminate even a few steps will be a welcome enhancement.

Microsoft Word


Add line numbers to a Word document

It isn’t often that we need to number lines in a Word document, but the need does arise occasionally. For instance, developers and programmers often display line numbers with code.

Of course, you’re not writing code in a Word document, but you might insert code into the middle of a technical document. If you do, you might just want to include line numbers for that code. Regardless of why you want line numbers, the surprising fact is that Word will comply and without much effort on your part.

The easy part is enabling the feature, as follows:

In Word 2003:

  1. From the File menu, choose Page Setup.
  2. Click the Layout tab.
  3. Click Line Numbering (at the bottom).
  4. Check the Add Line Numbering option.
  5. Check the appropriate options.

In Word 2007/2010:

  1. Click the Page Layout tab.
  2. In the Page Setup group, click Line Numbers.
  3. Choose the appropriate option, such as Continuous.

There’s a little bit of version confusion, but it’s a small obstacle. By default, Word 2003 begins numbering at the beginning of the document and restarts numbering with each new page, and of course you can change those settings. In Word 2007 and 2010, enabling the feature includes specifying how to number each new page or section. It’s not so different; it only seems a bit different at first.

Once you add line numbers, you’re not stuck with them, strictly speaking. To suppress line numbering (in Word 2003) for a section of text, right-click the selected text and choose Paragraph from the resulting context menu. Click the Line and Page Breaks tab, check the Suppress Line Numbers option, and click OK.

Similarly, Word 2007 and 2010 let you suppress specified areas. Simply select the paragraph(s) in question and choose Suppress For Current Paragraph from the Line Numbers option in the Page Setup group.

By default, the feature begins with 1 and increments by one. You can start with a different number and you can change the increment value. To do so, change the options via the Line Numbers dialog, as follows:

In Word 2003:

  1. From the File menu, choose Page Setup.
  2. Click the Layout tab.
  3. Click Line Numbering (at the bottom).

In Word 2007/2010:

  1. Click the Page Layout tab.
  2. In the Page Setup group, click Line Numbers.
  3. Choose Line Numbering Options.

To change the first number, edit the Start At value. The From Text option lets you determine the space between the number and the text. By changing the Count By value, you can change the increment value. The Numbering options are self-explanatory–you can restart numbering at the beginning of each new page or each new section.

You might never need this feature, but if the need arises, you’ll be able to say Yes! I can do that for you!

Microsoft Excel


Custom sorting in Excel

Sorting is a common task, but not all data conforms to the familiar ascending and descending rules. For example, months don’t sort in a meaningful way when sorted alphabetically. In this case, Excel offers a custom sort.

Before we look at a custom sort for months, let’s review the problem months present for normal sorting practices. Below, you can see the problem. When applying an ascending sort, the list sorts alphabetically instead of sorting by month order.

If you want an alphabetic sort, this works great. I’m betting that most of the time, though, this won’t be the result you want. You could use an expression that returns a value equal to the order of each month and sort by its results, but it’s unnecessary as there’s a built-in sort just for months. To apply this custom sort, do the following (in Excel 2003):

  1. Select the month names. In this case, that’s A2:A13.
  2. Choose Sort from the Data menu.
  3. The resulting dialog box anticipates the custom sort. The Sort By control displays Month with an Ascending sort. If you click OK, Excel will sort the selected months in alphabetic order.
  4. Click the Options button at the bottom of the dialog box.
  5. In the resulting dialog box, the First Key Sort Order control displays Month. Click the dropdown arrow to display four custom sort options.
  6. Choose the last option, January, February, March, and so on. By default, a custom sort isn’t case-sensitive, but there’s an option to make it so, if you need it.
  7. Click OK twice and Excel sorts the months in the familiar way you expect.

Excel 2007 and 2010 offer the same flexible custom sort, but getting there’s a bit different:

  1. Click the Sort option in the Sort & Filter group. (Don’t click the A to Z or Z to A sort icons, the ones with the arrows.)
  2. In the resulting Sort dialog box, click the Order control’s dropdown list and choose the appropriate custom sort.
  3. Click OK.

When using a custom sort, the list doesn’t have to contain all of the sort elements to work. A list of just a few months will still sort by month order when applying the custom sort.

Why a lively imagination may bolster security more than best practices

Knowing how to protect yourself and your privacy depends on understanding the dangers and figuring out solutions to the problems that create those threats. Knowing how to protect yourself against a virus depends on knowing why a virus is dangerous in the first place, and having at least some vague understanding of how viruses work.

And knowing how to protect yourself from the ill effects of someone using your personally identifying information to commit identity fraud depends on knowing what information people want from you for that purpose, and how they get it.

Some people rely on others to protect them, hoping those others:

  1. know what they do not know, and do not want to know, about protecting themselves–without putting in the time to learn enough about the subject to be able to actually determine whether those supposed protectors are exaggerating their skills, or
  2. care enough about their security to actually do a diligent job of protecting it–more, in fact, than they themselves care, since they are not willing to do the work for themselves–rather than only caring about whether they can be sued for failures.

As should be obvious once you understand the requirements for success of the strategy of leaving your security up to others, real security is your responsibility. This does not mean it is your fault if you are the victim of some depraved malicious security cracker’s scam, but it does mean that you must take necessary steps to protect yourself, because ultimately nobody else is likely to do as much for you as you yourself can do.

Of course, the truth is that you really cannot know how you might be subjected to misappropriation of your personally identifying information and how to protect yourself against it, to take but one example of a potential security threat.

Such knowledge is not something that can be written down and disseminated to the world, because it is not a static body of knowledge. It is dynamic and ever-changing: a field of battle on which the innovations of an arms race are constantly tested and regularly surpassed by newer innovations.

Back in the early ’80s, DES (Data Encryption Standard) was widely regarded as uncrackable, and was considered “the answer” for protecting data against unauthorized access, but by today’s cryptographic standards it is laughably vulnerable.

Understanding security is not a matter of studying and memorizing a lot of facts. It requires not knowledge so much as a way of thinking that helps you consider the way a security system can be subverted, broken, or circumvented–and, based on that, the ways it can be improved, or that its deficiencies can be mitigated by careful use or the application of additional tools that patch the holes in the shield.

As demonstrated by the events described in Quantum Hacking cracks quantum crypto, the current biggest weakness in new quantum key exchange systems is not the methods of ensuring keys have not been harvested off the wire; it is the hardware deployment used to make the quantum key exchange work in the first place.

As explained in 10 (+1) reasons to treat network security like home security, the security provided by a lock is limited by the strength of the door the lock secures and the doorframe in which the door is mounted–and a combination of strong locks, doors, and doorframes is only as secure as the window a couple feet to the left.

Obsessive focus on the intended uses of a security feature leaves you open to the unexpected. Flexibility and imagination are often more important for ensuring security against malicious security crackers and other “enemies” than slavish devotion to “best practices”.

Yes, you may have antivirus and firewall software installed on your laptop, but that will not do you much good if someone steals it from the trunk of your car. Maybe encryption can protect your data even if someone steals the laptop, but if you do not keep backups on a separate computer, encryption will not help you finish your Master’s thesis, as one unfortunate soul whose laptop was stolen found out.

When was the last time you considered the possibility that your computer may already be infected? Do you ever think, “Oh, it’ll be no problem to leave my desk without locking the screensaver just this once!”?

Do you want to be the person who designed an RFID system for passports to help protect your country from terrorists, but did not stop to consider whether a radio receiver wired to a bomb could detect the RFID signal of a passport from your country and detonate the device when its holder walks by?

Thinking “outside the box”, taking an imaginative and flexible approach to thinking about how processes and devices can be (ab)used for purposes for which they were not designed, can actually provide you with interesting ways to protect yourself as well as alert you to ways your security might be compromised.

Consider, for instance, the fact that guns can keep computers in your luggage safe. The fact that firearms in your luggage are treated differently from computers in your luggage, in terms of how you are allowed–or required–to transport them can actually be leveraged to ensure greater safety for your computers. It also happens to point out an important fact about TSA security requirements; you are not allowed to effectively secure your luggage against theft or vandalism except in specific, uncommon circumstances.

The upshot of all of this is that Albert Einstein was right when he said “Imagination is more important than knowledge.” Security is not really about what you know; it is about how you think.

Syncing time in Linux and Windows with NTP

There are plenty of reasons you should have your Linux and Windows servers set with the correct time. One of the most obvious (and annoying) is that, without the correct time, your Linux machine will be unable to connect to a Windows Domain.

You can also get into trouble with the configuration of your mail and web servers when the time is not correct (sending e-mail from the future is never a good idea). So how do you avoid this? Do you have to constantly be resetting the time on your machines? No.

Instead of using a manual configuration, you should set up all of your servers to use NTP (Network Time Protocol) so that they always have the correct time.

Windows Server settings
There is a very simple way to set your Windows Server OS (2000 and later) to use an external time server. To do this, simply click on Microsoft’s Fix it link and the necessary registry entries will be changed; your server will then start updating its time from an external source.

If you are more of the DIY Windows admin, you will want to know the registry edits that are made by clicking that Fix It link. Here they are (NOTE: The Windows registry is a tool that not all users are qualified to use. Make sure you do a backup of your registry before you make any changes.):

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters\Type

Right-click Type and select Modify. Change the entry in the Value Data box to NTP and click OK.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config\AnnounceFlags

Right-click AnnounceFlags and select Modify. In the Edit DWORD dialog, change the Value Data to 5 and click OK.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpServer

In the right pane, right-click Enabled and select Modify. In the Edit DWORD dialog, change the Value Data to 1 and click OK.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters

In the right pane, right-click NtpServer and select Modify. Change the Value Data to your list of time servers (Peers is a placeholder for a space-delimited list of NTP sources, such as pool.ntp.org) and click OK.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient\SpecialPollInterval

In the right pane, right-click SpecialPollInterval and select Modify. Change the Value Data to Seconds (where Seconds is the number of seconds between polls; 900 seconds–15 minutes–is a good value) and click OK.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config\MaxPosPhaseCorrection

In the right pane, right-click MaxPosPhaseCorrection and select Modify. Change the Value Data to Seconds (where Seconds is the largest positive time correction, in seconds, that the service is allowed to make) and click OK.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config\MaxNegPhaseCorrection

In the right pane, right-click MaxNegPhaseCorrection and select Modify. Change the Value Data to Seconds (where Seconds is the largest negative time correction, in seconds, that the service is allowed to make) and click OK.

Once you have made the final registry edit, quit the registry editor and then click Start | Run and enter the following command:

net stop w32time && net start w32time

Your Windows machine will now start syncing time to an external server at the set intervals.
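If you want to confirm that the change took, you can force an immediate synchronization and then check which source the machine is actually using. The exact w32tm options vary a little between Windows versions, so treat these as a rough sketch:

w32tm /resync

w32tm /query /status (Windows Vista/Server 2008 and later) OR w32tm /monitor (older versions such as Windows Server 2003)

The first command asks the time service to sync right away; the others report the time source the machine is currently using.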

On to the Linux server
In order to get NTP up and running you first have to install the ntp daemon on the machine. This is very simple, as ntpd will be located in your default repositories. So, with that in mind, open up a terminal window and issue one of the following commands (depending upon which distribution you are using). NOTE: If you are using a non-sudo distribution you will need to first su to the root user. Once you have administrative privileges issue one of the following:

  • sudo apt-get install ntp (for Debian-based systems).
  • yum install ntp (for Red Hat-based systems).
  • urpmi ntp (for Mandriva-based systems).
  • zypper install ntp (for SUSE-based systems).

Upon installation, your NTP system should be pre-configured correctly to use an NTP server for time. But if you want to change the server you use, you would need to edit your /etc/ntp.conf file. In this file you want to add (or edit) a line to reflect your NTP needs. An entry looks like:

server SERVER_ADDRESS [OPTIONS]

Where SERVER_ADDRESS is the address of the server you want to use and [OPTIONS] are the available options. Of the available options, there are two that might be of interest to you:
  • iburst: If the configured server is unreachable, this option sends a burst of eight packets instead of the usual single packet, which speeds up the initial synchronization.
  • dynamic: Use this option if the NTP server is currently unreachable (but will be reachable at some point).

By default, the /etc/ntp.conf file will look similar to this:

server 0.debian.pool.ntp.org iburst dynamic
server 1.debian.pool.ntp.org iburst dynamic
server 2.debian.pool.ntp.org iburst dynamic
server 3.debian.pool.ntp.org iburst dynamic

More than one server is used in order to assure a connection. Should one server not be available, another one will pick up the duty.

When you have everything set up correctly, enter the following command:

sudo /etc/init.d/ntp start (on Debian-based machines) OR /etc/rc.d/init.d/ntpd start (on most other machines, where the init script is usually named ntpd rather than ntp. NOTE: You will need to first su to the root user for this command to work).

Your machine should now start syncing its time with the NTP server configured.
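To verify the daemon is actually talking to its servers, you can list the configured peers with the ntpq utility (installed alongside ntp on most distributions):

ntpq -p

Each server from /etc/ntp.conf should show up in the output, and an asterisk marks the peer your machine is currently synchronized to. If the list stays empty, double-check the configuration file and make sure UDP port 123 isn’t blocked by a firewall.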

Final thoughts
It may seem like an unnecessary task, but in certain systems and configurations, precise time is crucial. Whether you are serving up web pages or mail, or trying to connect to a Windows domain, keeping the correct time will make just about every task either easier or simply correct.

My first IronRuby application

In my continuing exploration of IronRuby, I was in search of a good opportunity to try writing a Ruby application from scratch that was interesting but not trivial. Fortunately, TechRepublic writer Chad Perrin posted a blog that fit the bill.

To summarize his post, he was looking for a good way to find out, for a roll of multiple dice of the same size, how many combinations add up to each possible sum. For example, if you have five six-sided dice, how many permutations add up to each of the numbers five through 30? I decided that working on his problem would be the perfect opportunity for me to really explore IronRuby.

Yes, there are some mathematical approaches to this problem that eliminate the need for super fancy programming. But I’m not a math whiz, and implementing a three-line formula wasn’t going to help me learn Ruby or use the IronRuby environment. So I set about solving the problem myself, from scratch.

My first attempt at solving this was a bit too clever by half: I tried to construct a string of code using loops and then run it through eval(). This is the kind of thing I used to do in Perl all the time.

While this approach has merit, it felt like it was too much of a workaround. The final nail in the coffin for me was that I don’t know Ruby well enough to be able to write code that writes code and have the written code actually work. Debugging eval()’ed stuff can be a nightmare, in my experience. After about 30 minutes of frustration, I took a step back.

After writing to Chad about the problems I was having, I realized that I would have been better served by using a recursive function to write my code to eval(). The major challenge with this problem is that, while it can be solved with nested loops, the number of levels of nesting is unknown at the time of writing; this is what I was hoping to mitigate with my eval() approach.

As I sat down to write the recursive version of the code generator, a lightbulb went off in my head: “if I’m writing a recursive function, why not just solve it recursively?” So I did, and less than 30 minutes later (remember, I never wrote Ruby from scratch before), I had a working application.

Now, the code isn’t perfect. At the time of this writing, it isn’t creating nice output, and it isn’t calculating the percentages. These issues are easily solved. But for my first try at this problem, I am proud of the output. See the code sample below.

# Recursively walk every possible roll, tallying how many rolls produce each sum.
def calculate(iteration, low, high, currentsum, output)
  if iteration == 1
    low.upto(high) do |value|
      newsum = currentsum + value
      output[newsum] += 1
    end
  else
    low.upto(high) do |value|
      calculate(iteration - 1, low, high, currentsum + value, output)
    end
  end
  return output
end
diceInput = ARGV[0].to_i
lowInput = ARGV[1].to_i
highInput = ARGV[2].to_i
if diceInput < 1
  puts "You must use at least one die."
  exit
end
# Pre-populate the hash so every possible sum starts with a count of zero.
initResults = Hash.new
(lowInput * diceInput).upto(highInput * diceInput) do |value|
  initResults[value] = 0
end
results = calculate(diceInput, lowInput, highInput, 0, initResults)
results.each do |result|
  puts "#{result}"
end
puts "Press Enter to quit..."
gets

My thoughts about IronRuby

While working on this solution, I got more familiar with IronRuby. To be frank, it needs some work in terms of its integration with the Visual Studio IDE. As a Ruby interpreter, it seems fine (I know it doesn’t get 100 percent on the Ruby compatibility tests), but the integration isn’t what I need.

For one thing, the “Quick Watches” do not work at all from what I can tell. Setting watches does not seem to work either. You can do value inspection via the “Locals” window, though. But it’s really unpleasant to see the options you really want but not to be able to use them.

The lack of IntelliSense isn’t a deal breaker, but it sure would be nice. No F1 help is pretty harsh, especially for someone like me who is not familiar with Ruby at all. It felt very old-school to be thumbing through my copy of The Ruby Programming Language while working!

I also found it rather interesting how Ruby handles variable typing. I’m so used to Perl, where a variable’s type is essentially determined by usage on a per-expression basis.

For example, you can assign a string literal that is composed only of numbers to a variable, and then perform mathematical operations on it. In Ruby, this doesn’t happen. Instead, if I assign a string literal to a variable, it functions as a string until I assign something of a different type to that variable. While this is perfectly sensible, it went against the grain of my way of thinking. Once I got a handle on this, my work in Ruby went a lot more smoothly.
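To make that concrete, here is a tiny, throwaway snippet (not from the dice application) showing what I mean:

value = "3"       # value currently holds a String
# value + 1       # this would raise a TypeError; Ruby will not quietly treat "3" as a number
value.to_i + 1    # => 4, once the conversion is made explicitly
value = 3         # reassignment, not usage, is what changes the type the variable holds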

Summary
I’m certainly no Ruby expert, but at this stage in the game, I feel like it is a language that I want to continue using in my career. Ruby has a lot to offer in terms of expressiveness. Soon, I will explore its use in Windows Phone 7 applications and take a look at how it interoperates with the .NET Framework.

Microsoft Word


Where’s the Number of Pages option in Word 2007 and 2010?

Inserting the page number in earlier versions of Word is simple. You open the header or footer and click the appropriate options on the Header and Footer toolbar. The Page Numbers option is also available from the Insert menu.

It’s still easy in Word 2007 and 2010, but finding the options might complicate the task just a bit.

To insert a page number at any time in a Word 2007 or 2010 document, click the Insert tab and then click Page Number in the Header & Footer group. Although this option is in the Header & Footer group, you can select Current Position to insert a literal page number almost anywhere in a document; you’re not limited to the header and footer sections. That’s the easy part.

You can also click the Insert tab and click Header or Footer from the Header & Footer group. The resulting gallery will offer a number of pre-defined page numbering options, although none of them offer the Page x of y format.

If you want to insert page numbers via fields, use Quick Parts. You’ll find this option in the Text group, also on the Insert tab. From the Quick Parts dropdown list, choose Field.

After selecting Field, specify one of the page numbering fields, Page or NumPages. You can combine the two to create the form Page x of y. This process is probably familiar to you, but finding it via Quick Parts is new to Word 2007 and 2010. It’s probably an appropriate spot, but it might not be the first place you look.

Displaying page numbers in Word 2007 and 2010 is still easy. The Number of Pages option isn’t hidden, but you might have trouble finding it.

Microsoft Excel


Restrict duplicate data using Excel Validation

Excel sheets accept duplicate values of course, but that doesn’t mean you’ll always want to allow them. There are times when you’ll not want to repeat a value. Instead of entering a new row (or record), you’ll want the user to update existing data. You can train users, but that doesn’t mean they’ll comply. They’ll try to, but specific rules are easy to forget, especially if updates are infrequent. The easiest way to protect a sheet from duplicate values is to apply a validation rule. If a user tries to enter a duplicate value, the appropriate validation rule will reject the input value and (usually) provide helpful information as to what the user should do next.

For example, let’s suppose users track hours worked using the sheet shown below. You want each worked date (column A) entered just once–there’s no signing in and out for lunch or other activities. (This setup would be too restrictive for most situations, but it sets up the technique nicely.) Realistically, a user could easily see that they’re re-entering an existing date, but in a sheet with a lot of data, that wouldn’t be the case. At any rate, there’s nothing to stop the user from entering the same date twice.

To apply a validation rule that restricts input values to only unique values, do the following:

  1. Select A2:A8 (the cells you’re applying the rule to).
  2. Choose Validation from the Data menu and click the Settings tab. In Excel 2007/2010, click the Data tab and choose Data Validation from the Data Validation dropdown in the Data Tools group.
  3. Choose Custom from the Allow dropdown list.
  4. The Custom option requires a formula that returns True or False. In the Formula field, enter the following expression: =COUNTIF($A$2:$A$8,A2)=1.
  5. Click the Error Alert tab and enter an error message.
  6. Click OK.

Once you set the rule in place, users must enter unique date values in A2:A8. As you can see below, Excel rejects a duplicate date value, displays a simple explanation, and tells the user what to do next–click Retry and enter a unique date.

This particular validation formula accepts any value; it just won’t accept a duplicate value. The cell format is set to Date, which restricts entry to date values. You can use this formula to restrict any type of data, not just date values.

Microsoft Office


Disable printer notification in the Windows System Tray

When you send something to the printer, Windows displays a small balloon from the System Tray that identifies the document you’re printing, and that includes Office documents. I find this notification annoying.

Fortunately, you can disable this annoying feature, as follows (in Windows XP):

  1. Click the (Windows) Start menu and choose Printers and Faxes.
  2. In the Printers and Faxes window, choose Print Server Properties from the File menu. In Windows 7, Print Server Properties is on the window’s toolbar.
  3. Click the Advanced tab.
  4. Clear the Show Informational Notifications For Local Printers option.
  5. Clear the Show Informational Notifications for Network Printers option (if applicable).
  6. Click OK.
  7. Close the Printers and Faxes window.

Depending on your system’s configuration, you might have to disable the notifications for both local and network printers.

You’re probably wondering if this feature really annoys me as much as I say or if I’m just employing a clever writing device to make this entry more interesting. Honestly, it’s an annoying interruption I can definitely live without.

Of course, to more patient folk, this interruption seems insignificant. After all, it doesn’t keep you from working and it will disappear on its own, eventually. Most people will ignore it, but it diverts my attention, even after all this time. If it annoys you as much as it annoys me, you’ll appreciate knowing how to disable it!

5 tips for deciding whether to virtualize a server

Even though server virtualization is all the rage these days, some servers simply aren’t good candidates for virtualization. Before you virtualize a server, you need to think about several things.

Here are a few tips that will help you determine whether it makes sense to virtualize a physical server.

1: Take a hardware inventory
If you’re thinking about virtualizing one of your physical servers, I recommend that you begin by performing a hardware inventory of the server. You need to find out up front whether the server has any specialized hardware that can’t be replicated in the virtual world.

Here’s a classic example of this: Many years ago, some software publishers used hardware dongles as copy-protection devices. In most cases, these dongles plugged into parallel ports, which do not even exist on modern servers. If you have a server running a legacy application that depends on such a copy-protection device, you probably won’t be able to virtualize that server.

The same thing goes for servers that are running applications that require USB devices. Most virtualization platforms will not allow virtual machines to utilize USB devices, which would be a big problem for an application that depends on one.

2: Take a software inventory
You should also take a full software inventory of the server before attempting to virtualize it. In a virtualized environment, all the virtual servers run on a host server. This host server has a finite pool of hardware resources that must be shared among all the virtual machines that are running on the server as well as by the host operating system.

That being the case, you need to know what software is present on the server so that you can determine what system resources that software requires. Remember, an application’s minimum system requirements do not change just because the application is suddenly running on virtual hardware. You still have to provide the server with the same hardware resources it would require if it were running on a physical box.

3: Benchmark the system’s performance
If you are reasonably sure that you’re going to be able to virtualize the server in question, you need to benchmark the system’s performance. After it has been virtualized, the users will be expecting the server to perform at least as well as it does now.

The only way you can objectively compare the server’s post-virtualization performance against the performance that was being delivered when the server was running on a dedicated physical box is to use the Performance Monitor to benchmark the system’s performance both before and after the server has been virtualized. It’s also a good idea to avoid over-allocating resources on the host server so that you can allocate more resources to a virtual server if its performance comes up short.
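If you prefer the command line to the Performance Monitor GUI, one rough way to capture a baseline is the built-in typeperf utility; the counters, interval, and sample count below are only illustrative and should be tailored to the workload you care about:

typeperf "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -si 15 -sc 240 -o baseline.csv

Run the same collection before and after the migration and compare the two CSV files.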

4: Check the support policy
Before you virtualize a server, check the support policy for all the software that is running on the server. Some software publishers do not support running certain applications on virtual hardware.

Microsoft Exchange is one example of this. Microsoft does not support running the Unified Messaging role in Exchange Server 2007 or Exchange Server 2010 on a virtual server. It doesn’t support running Exchange Server 2003 on virtual hardware, either.

I have to admit that I have run Exchange Server 2003 and the Exchange Server 2007 Unified Messaging role on a virtual server in a lab environment, and that seems to work fine. Even so, I would never do this in a production environment because you never want to run a configuration on a production server that puts the server into an unsupported state.

5: Perform a trial virtualization
Finally, I recommend performing a trial virtualization. Make a full backup of the server you’re planning to virtualize and restore the backup to a host server that’s running in an isolated lab environment. That way, you can get a feel for any issues you may encounter when you virtualize the server for real.

Although setting up such a lab environment sounds simple, you may also have to perform a trial virtualization of some of your other servers. For example, you might need a domain controller and a DNS server in your lab environment before you can even test whether the server you’re thinking about virtualizing functions properly in a virtual server environment.

Brien Posey is a seven-time Microsoft MVP. He has written thousands of articles and written or contributed to dozens of books on a variety of IT subjects.

Take the ‘policy’ out of IT

Reading the admonishments of the IT “establishment”, one could be excused for thinking we were becoming politicians or diplomats.

According to the pundits, each new technology and innovation requires a raft of overwrought “policy” documents. Whether it’s social media, cloud computing, or boring old desktop usage, apparently the ultimate expression of IT value is producing a multichapter treatise of do’s and don’ts that will likely be immediately filed in the bin by those who have actual work to do at your company.

The butt of most corporate jokes, our friends in HR, are another business unit historically mired in policy and in too many cases blind to its actual benefits to the company (or lack thereof).

Think of the last time you received a series of e-mail blasts addressed to every employee of your company, heralding the arrival of a new HR policy with the breathless zeal usually reserved for the latest teen celebrity. Was your reaction to drop everything you were doing, click the “refresh” button with bated breath until the newest HR policy appeared on the screen, and read every line with unreserved zeal?

If you are like most normal workers, you are overloaded with work, and if you expend more than eight seconds of consideration on a new HR policy, you are probably 100 percent more diligent than your peers. IT policies are greeted with similar disdain and perhaps even less enthusiasm than HR policies, simply because HR is the most visible entity in getting paychecks out the door.

Rather than rushing to sign a raft of consultants to a six-figure engagement to develop the perfect IT policy, consider the following.

Treat your employees like adults until proven otherwise
Unless you have reason to suspect otherwise, you can safely treat your employees like adults. Certainly there is some percentage of them who will run an imaginary farm or mafia family during business hours, but more than likely that same demographic is sneaking a peek at their Blackberry or answering a business-related phone call in the off hours. Consider for a moment that these people are likely intelligent enough to realize that Mafia Wars is not work-related, so is a 50-page policy document from IT really going to change this behavior?

In most companies, people are regularly entrusted with million-dollar decisions and are usually able to manage these responsibilities quite capably without a policy document. Apply the same basic logic to your IT resources. Expect your people to make the right decisions without unwieldy lists of “don’ts.”

Just as someone who makes an inappropriate business decision or steals company resources is appropriately punished, educate and reprimand those who misuse IT resources–without treating the rest of your staff like children.

Help staff use new tools appropriately
Rather than trying to craft a manifesto, work with interested parties to demonstrate new technologies or educate staff when a publicly available technology might be inappropriate for corporate use. Spend an afternoon with the marketing folks explaining the latest presence-based social media tools, and IT becomes a trusted advisor rather than the draconian “Facebook police”.

Should you see a Web-based technology that poses a definite risk to information security, educate staff on the risk and provide an alternative. Perhaps you don’t want employees putting sensitive internal information on a cloud-based storage site; if you can explain the risks in nontechnical terms and provide a reasonable alternative, most employees are willing to work with you and even offer suggestions on how IT might be able to meet a business need. If you block the latest service, you’ll spend years playing cat and mouse as users thwart each new block you put in place.

Policies make you look silly
One of the most overlooked points is that overwrought policy documents make IT look silly. Most CIOs are clamoring for the elusive concept of “IT alignment”, where IT is perceived as an integral part of the business rather than a cadre of internal order takers. The whole concept of extensive policy documents makes IT seem out of touch.

If you can intelligently summarize the risks and associated benefits of new technologies to your executive peers, you can jointly develop a strategy for monitoring and mitigating the risks and promoting and leveraging the benefits. This can and should be a sidebar discussion to IT’s other activities. When producing policies is the crowning achievement of an IT organization, it looks all the more compelling to outsource IT.

Patrick Gray is the founder and president of Prevoyance Group, and author of Breakthrough IT: Supercharging Organizational Value through Technology. Prevoyance Group provides strategic IT consulting services to Fortune 500 and 1000 companies.

5 easily overlooked desktop security tips

The desktop computer is the heart of business. It is, after all, where business gets done. But so much effort goes into securing our servers (and with good reason), that often the desktops are overlooked.

But that does not need to be the case. Outside your standard antivirus/anti-malware/firewall, there are ways of securing desktops that many users and techs might not think about. Let’s take a look.

1: Patch that OS
Although many updates occur for feature-bloat, some updates do in fact happen for security reasons. One of the first things you should do, prior to deploying a desktop, is apply all the patches available for it.

Do not deploy a desktop that has known, gaping security holes. If you are deploying a desktop that has not been fully updated, it will be vulnerable from the start. And this tip applies to all platforms, not just Windows.

2: Turn off file sharing
Those who must share files can ignore this tip. But if you have no need to share files on your desktop, you should turn this feature off.

For Windows XP, click Control Panel | Network Connections | Local Area Connection Properties. From that window deselect File And Printer Sharing, and you’re good. In Windows 7, open the Control Panel and then go to the Network And Sharing Center. Now click Change Advanced Sharing Settings in the left pane. From this new window, expand the network where you want to disable sharing and select Turn Off File And Printer Sharing. Done.

3: Disable guest accounts/delete unused accounts
Guest accounts can lead to trouble. This is especially true because so many users leave guest accounts without password protection. This might not seem like a problem, since the guest user has such limited access. But giving access to a guest user creates a security risk. You are much better off disabling the guest account.

The same goes for unused accounts. This is such a common mistake. Machines get passed around from user to user in many businesses, and the old users do not get deleted. Don’t let this happen to you. Make sure the users on your system are actually active and need access to the machine. Otherwise, you have yet another security hole.

4: Employ a strong password policy
This should go without saying. SHOULD. But how many times do you come across the word “password” as a password? Do not allow your users to make use of simple passwords. If the password can be guessed with little effort, that password should never be used.

This can be set in server policies. But if you don’t take advantage of policies, you will have to enforce this on a per-user basis. Do not take this lightly. Weak passwords are one of the first ways a machine is compromised.
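If you aren’t using domain policies, you can still tighten the local password policy from an administrative command prompt; the numbers below are only an example, not a recommendation for every environment:

net accounts /minpwlen:10 /maxpwage:90 /uniquepw:5

That sets a minimum password length of 10 characters, forces a change every 90 days, and remembers the last five passwords so they can’t be reused. (On Linux desktops, the equivalent settings usually live in /etc/login.defs and the PAM configuration.)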

5: Mark personal folders/files private
You can enable folder/file sharing on a machine but still have private folders. This is especially important for personal information. Some businesses might not allow personal files to be saved on desktop machines, but that’s a rarity. If you work in a company that allows you to house personal data, you probably won’t want your fellow employees to have access to it.

The how-to on this will vary from platform to platform (and is made even more complex by the various editions of the Windows platform). But basically, you change the security permissions on a folder so only the user has access to the folder. To do this, right-click on the folder and select Properties. From within the Properties window, go to Security and edit the permissions to restrict access to just the user.
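On Windows 7, the same idea can be scripted with the built-in icacls tool; the folder path and user name here are just placeholders:

icacls "C:\Users\joe\Private" /inheritance:r /grant:r joe:(OI)(CI)F

The /inheritance:r switch strips the permissions inherited from the parent folder, and the /grant:r entry leaves the named user with full control of the folder and everything in it. Keep in mind that administrators can still take ownership, so this is about privacy from other standard users, not absolute protection.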

When ‘open source’ software isn’t truly open source

Richard Stallman may have kick-started the Free Software and open source software community, but the Open Source Initiative was founded in 1998 by Bruce Perens and Eric Raymond to offer an alternative approach to defining and promoting the same software development processes and licenses, and that approach has gotten the lion’s share of public recognition since.

Free Software is a term that both promotes Stallman’s ideological goals regarding how software is distributed, thus turning off business-oriented software users who disagree with Stallman’s ideology, and manages to conflate itself with software that simply doesn’t cost anything.

Perens and Raymond coined the term “open source software” to refer to software developed under essentially the same conditions as Stallman’s Free Software definition.

Once the term “open source software” was coined, it was also defined. The official Open Source Definition is clear, and explains how software must be licensed to qualify as open source software.

The specific points of the Definition address issues related to:

  1. Redistribution
  2. Source code
  3. Derived works
  4. Integrity of the author’s source code
  5. No discrimination against persons or groups
  6. No discrimination against fields of endeavor
  7. Distribution of license
  8. License product specificity
  9. License restrictions on other software
  10. License technology neutrality

A summary of the effect of the conditions mandated by that Definition is available in the Wikipedia article about open source software:

Open source software (OSS) is computer software that is available in source code form for which the source code and certain other rights normally reserved for copyright holders are provided under a software license that permits users to study, change, and improve the software.

Unfortunately, the fact is that many people misuse the term “open source software” when referring to their own software. In the process of looking for a decent dice roller IRC bot in 2010, I came across one called Bones. On the announcement page for it, Bones: Ruby IRC Dicebot, its author said the dicebot is:

Free: Like most IRC bots, Bones is open source and released free of charge.

In subsequent e-mail discussion with its author, it turned out that his definition of “open source” is substantially different from that of the Open Source Initiative, me personally, and the entire open source community–to say nothing of Microsoft, Oracle, tech journalists, and just about everybody else who uses the term:

Question: You said in your page for bones that “Like most IRC bots, Bones is open source and released free of charge.”  What open source license are you using for it?

I haven’t released it under any particular open source license. It’s only open source in so far as it isn’t closed source.

In an attempt to clarify the legal standing of the IRC bot, I asked further:

Any chance I could get you to let me do stuff with it under a copyfree license (my preference is the Open Works license, though BSD and MIT/X11 licenses or public domain are great too) so I can hack on yours a bit rather than just having to start over from scratch?

I don’t plan on releasing it under a license, but that shouldn’t stop you from making changes to the code if you like.

Of course, if I have the source in my possession–which is pretty much a given for any Ruby program–I can indeed make changes to it if I like. The point he ignored is that without a license setting out what permissions the copyright holder grants to recipients of the code, these recipients cannot legally share changes with others.

This effectively made any interest I had in improving his program dry up and blow away. It also effectively means that when he called it “open source”, he made an error either of ignorance or of deception. When I pointed out to him the legal problems involved, he declined to respond.

There is some argument to be made as to whether a license that conforms to the requirements of the Open Source Definition should be called an “open source” license even if it has not been certified by the OSI itself. Many of us are inclined to regard a license as an open source license if it obviously fits the definition, regardless of certification.

By that standard, the Open Works License and WTFPL (Note: the full name of the WTFPL may not be safe for reading at work, depending on your workplace; be careful clicking on that link) are open source licenses.

By the standards of the list of OSI approved licenses, however, they are not–because the OSI requires an extensive review process that lies somewhat outside the range of what many would-be license submitters have the time and resources to pursue.

Let us for argument’s sake accept that merely conforming to the Open Source Definition is sufficient to call a license “open source”, regardless of official approval by the OSI. By contrast with the Bones IRC bot, then, an IRC bot called drollbot (part of the larger droll project) that I wrote from scratch to serve much the same purpose as Bones actually is open source software, released under the terms of the Open Works License.

The simple comparison of Bones with drollbot serves to illustrate the difference between what really is open source software and what only pretends to be. The pretense, in this case, is an example of something many people call “source-available software”, where the source code is available but recipients are granted no clear legal permission to modify, redistribute, and even sell the software if they so desire–requirements of both the Open Source Definition and the Free Software Foundation’s definition of Free Software.

There are many other concerns related to how we classify software and the licenses under which we distribute it, but many of them are secondary to the simple necessity of understanding what is or is not open source software at its most basic level. Whether or not you consider a piece of software to qualify as “open source software” when its license has not been officially approved by the OSI, one thing is clear: before you go around telling people to download your “open source software”, you should give them assurance of the most basic requirement of open source software–the thing that differentiates it from software that has merely been written in a language traditionally run by an interpreter rather than compiled to an executable binary:

When you call something “open source software”, you must give all recipients a guarantee that they may modify and redistribute the software without fear of lawsuits for copyright violation. If you do not do that, by way of a license that conforms to the Open Source Definition or by releasing the software into the public domain, what you give them is not open source software. Period.

Memoize recursive functions to conserve resources

Memoization is a form of caching that is used to reduce duplication of effort by your program. In short, it is a means of caching results so that when generating large data sets the same results do not need to be recalculated as part of an otherwise elegant algorithm.

The most common use of memoization is in recursion. Most programmers are aware of recursion; some even understand and use it. The majority of modern functional programming languages provide tail call optimization, but those languages that do not (usually object oriented or merely procedural) include some of the most widely used programming languages.

Tail call optimization is typically implemented by an interpreter or compiler that reuses the current stack frame for recursive calls in tail position, so that deep recursion does not keep consuming resources. In languages that lack tail call optimization, a similar saving is achieved through memoization, which avoids performing the same calculations over and over again.

An example of an algorithm that could benefit greatly from tail call optimization or memoization is the recursive definition of a Fibonacci number:

F(0) = 0
F(1) = 1
F(n > 1) = F(n-1) + F(n-2)

This is a prime example of the importance of optimization in programming. The simplistic recursive source code for this in Ruby would look something like this:

def fib(n)
  return n if [0,1].include? n
  fib(n-1) + fib(n-2)
end

Unfortunately, the most widely used production version of Ruby today, Ruby 1.8.7, does not support tail call optimization (look for it in Ruby 1.9+). If n = 4 in the above code, it ends up being calculated like this:

fib(4)
= fib(3) + fib(2)
= (fib(2) + fib(1)) + (fib(1) + fib(0))
= ((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))
= ((1 + 0) + 1) + (1 + 0)
= (1 + 1) + 1
= 2 + 1
= 3

That’s a lot of effort just to get the number three. The problem is that, until the numbers start getting down to 0 or 1, every operation requires two sub-operations. With a high enough Fibonacci number to start the process, the number of operations required gets absolutely insane. Using the Ruby REPL, you can see that the fourth Fibonacci number requires nine calls to the fib() method, and the fifth Fibonacci number requires 15 calls:

> irb
irb(main):001:0> $opcount = 0
=> 0
irb(main):002:0> def fib(n)
irb(main):003:1>   $opcount += 1
irb(main):004:1>   return n if [0,1].include? n
irb(main):005:1>   fib(n-1) + fib(n-2)
irb(main):006:1> end
=> nil
irb(main):007:0> fib 4
=> 3
irb(main):008:0> $opcount
=> 9
irb(main):009:0> $opcount = 0
=> 0
irb(main):010:0> fib 5
=> 5
irb(main):011:0> $opcount
=> 15

By the time you get to fib 20, you have 21,891 calls to fib(). In a test run, fib 30 took more than 10 seconds to complete and required 2,692,537 calls to fib().

Memoization greatly reduces the number of such operations that must be performed, by only requiring each Fibonacci number to be calculated once.

For the simple version, start by creating an array to hold already calculated numbers. For each calculation, store the result in that array. For each time a recursive function would normally calculate one of those numbers, check to see if the number is stored in your array; if so, use that, and if not, calculate and store it. As a jump-start to the array, set the first two elements to 0 and 1, respectively.

In Ruby, a Fibonacci number generator might be modified to look like this:

$fibno = [0,1]
def fib(n)
  return n if [0,1].include? n
  ($fibno[n-1] ||= fib(n-1)) + ($fibno[n-2] ||= fib(n-2))
end

By caching values as you go so that fewer recursive calls are needed, you can get the result of fib 30 pretty much instantaneously. The total number of recursive calls to fib() is reduced from 2,692,537 to a mere 29. In fact, the number of calls to fib() increases linearly, so that the number of calls is always equal to the ordinal value of the Fibonacci number you want minus one. That is, fib 50 makes 49 calls to fib(), and fib 100 makes 99 calls to fib().

That assumes you reset $fibno every time. You can leave it alone, and reduce the number of calls to fib() even more on subsequent calls. For instance, try fib 100 with $fibno = [0,1], and 99 calls to fib() will be made. Try fib 40 without resetting $fibno, though, and only one call to fib() will be made, because $fibno already contains the appropriate value.

You can also use a somewhat simpler approach to caching than the above example. Instead of the number of calls to fib() only increasing by one for each increase in the ordinal Fibonacci value, it increases by two, resulting in 59 operations instead of 29 for fib 30:

$fibno = [0,1]
def fib(n)
  $fibno[n] ||= fib(n-1) + fib(n-2)
end
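As an aside–this variation is mine, not part of the comparison above–Ruby also lets you push the caching into the Hash itself by giving it a default block that computes and stores missing entries on demand:

fib_cache = Hash.new do |cache, n|
  # compute, store, and return the nth Fibonacci number the first time it is requested
  cache[n] = n < 2 ? n : cache[n - 1] + cache[n - 2]
end
fib_cache[30]   # => 832040, with each Fibonacci number calculated exactly once

After that first lookup, asking for any value up to 30 is just a hash read, so no recursion happens at all.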

Similar caching mechanisms can be used to achieve similar effects in other languages that do not optimize tail calls, such as C, Java, and Perl. In fact, in Perl this caching idiom has a special name: the Orcish Maneuver. The name comes from the or-equals operator, ||=, which can be pronounced “or cache” in cases such as memoization.

Say “or cache” very quickly, and you get the name of something that bears the stamp of a favorite fantasy monster. Perhaps this is how an Orc would optimize a recursive function, after all.

In Perl, the term Orcish Maneuver is typically applied to sorting functions rather than recursive series generation functions as in the case of memoizing a Fibonacci number generator. The canonical Perl example of the Orcish Maneuver looks something like this:

my @words = (
  'four',
  'one',
  'three',
  'two'
);

my %numbers = (
  'one' => 1,
  'two' => 2,
  'three' => 3,
  'four' => 4,
);

my @numerically_sorted_words;

sub str2n {
  my $key = shift;
  $numbers{$key};
}

{ my %cache;
  @numerically_sorted_words = sort {
    ($cache{$a} ||= str2n($a)) <=> ($cache{$b} ||= str2n($b));
  } @words;
}

foreach (@numerically_sorted_words) { print "$_\n"; }

A probably more useful application of Perl’s Orcish Maneuver would be for month names, but this example at least shows how the maneuver is used.

5 tips for building a successful global IT workforce

Successful managers agree: The strength of an organization’s IT talent pool is a critical component to building and growing a successful company.

And today’s global environment makes it possible for an organization to build its workforce without restrictive geographical boundaries. An organization can pull talent and resources from around the world to build the strongest and most efficient team possible.

As a result, it is important for organizations to develop a systematic approach for recruiting and maintaining talent on a global level. In addition, organizations must implement strategies to optimize and harness the global IT service talent that best meets their IT service business requirements.

The following five tips will help CIOs and executives recruit global IT talent more effectively to ensure their organization’s workforce is built for success.

1: Set objectives but allow for flexibility in roles
Setting goals and objectives is necessary when determining roles and responsibilities in an organization; however, it is also important to maintain flexibility to make room for individuals’ unique skills and experiences.

Maintaining a capable IT workforce is most effective when role requirements are clearly defined while still being flexible enough to incorporate the broad range and the scope of skills and talents available. For example, executives may consider redesigning service technicians’ roles so an employee’s unique skill set can shine through.

By remaining flexible, it is easier to ensure organizational culture will support a diverse group of employees who thrive by playing up their strengths and following their instincts.

2: Recruit and promote from within
Companies should work to identify internal resources to develop and grow their workforce. Many managers have found that their company’s most valuable resources lie inside the organization. Given the right training and support, internal candidates are put in a position to perform a broader variety of tasks, a particularly vital capability within the IT service industry where technologies and processes are constantly evolving.

3: Hire for innate talents and be willing to invest in training
Most executives trying to build a successful organization understand that it’s important to find a balance between innate abilities and specific experience or qualifications when looking for IT talent. Often, hiring managers find a candidate who may have the right personality, experience, and problem-solving skills but may lack a particular certification or technical skill set.

When it comes to pooling talent and building an IT workforce, decision makers need to understand that even though certain skills can be taught, the innate abilities and attitude of a prospective hire can’t be instilled with training. When a candidate with the right personality becomes available–even if he or she lacks a certain skill set–it helps to define the skills required for the position and determine whether any certification gaps can be closed through training.

4: Build a candidate pipeline
To maintain the most efficient and well-balanced IT workforce, it’s essential to be consistently on the lookout for talent. This is even more important as talent pools continue to globalize, resulting in larger and more diverse candidate pools. Having a candidate pipeline will reduce the likelihood an organization will be caught off guard or unprepared when a position opens up.

Managers should ensure that they are never making hasty decisions or missing opportunities for talent. Examine your organization’s business plan and try to anticipate future needs, including geographical expansions or relocations. Network and nurture relationships in an effort to recruit talent that aligns with your organization’s future needs and direction.

5: Diversify
The most diverse organizations tend to be those that are flexible and strategic about recruiting and maintaining IT talent. Executives at these organizations understand the value of diversity in experience, perspective, and skill when building a workforce.

This is even more important when building a global IT workforce, as there is a greater opportunity to connect with individuals with a wide variety of skill sets. By leveraging the existing talent pool, nurturing global networks, and investing in diversity, organizations can effectively mine new IT service talent sources, build multicultural talent, and be well positioned for market success.

Summary
In today’s global environment, decision makers must consider business goals and the direction of the organization when mining talent and managing an IT workforce. By prioritizing company needs, being open to diversity and unique skill sets, and valuing the talent that already exists internally, executives are more apt to employ a workforce aligned with company values–one that plays a strategic role in a company’s ability to develop new services and expand into new markets.

By focusing on having the right resources to identify new sources of talent, optimizing the IT talent pool and building a strong talent pipeline, organizations can gain access to skills that support business goals, build bench strength and recruit effectively to enhance competitive advantage.

Jay Patel is director of professional services for Europe, Middle East, and Africa at Worldwide TechServices.

Configure a time server for Active Directory domain controllers

Time management is one of the more critical aspects of system administration. Administrators frequently rely on Active Directory to sync time from client servers and workstations to the domain. But where does Active Directory get its time configuration?

Well, that depends on various factors. Default installations may go directly to Microsoft, and virtual machines may set themselves to update to the host servers.

The best way to ensure the time is accurate on a consistent basis is to establish one authoritative time source for your organization. An authoritative time source is the time server(s) that all systems on your network trust as having the accurate time.

The source can be an Internet time server or the pool, or it can be something you fully administer internally. Regardless, a designated authoritative time source for a given organization should be determined ahead of time.

From there, you can configure the Active Directory domain controller that holds the PDC emulator role in a domain to use this list of servers explicitly for its time. Read this TechNet article to learn how the time service operates within a forest. The main takeaway is that the w32tm command is used to set the list of peers that specifies where a domain sources its time.

The command snippet below sets the time peer to an Internet NTP server:

w32tm /config /manualpeerlist:"nist.expertsmi.com" /syncfromflags:manual /reliable:yes /update

If you want to put in a pool of servers, they can be separated by a space. When the command is executed on a domain controller, it runs once and the change is reflected in the registry. Figure A shows this on a sample domain controller.

Figure A

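For example, a hypothetical peer list pointing at a few public pool servers–substitute whatever source your organization has designated as authoritative–would look something like this:

w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org 2.pool.ntp.org" /syncfromflags:manual /reliable:yes /update

w32tm /resync

The second command simply forces an immediate synchronization so you can confirm the new peers are being used.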

I recommend applying this configuration to all domain controllers and possibly even making it a Group Policy object as a startup script for the Domain Controllers organizational unit within Active Directory.

This tip applies to current Windows Server technologies, though not much has changed over the years with regard to this topic. See what I mean by reading this tip by Mike Mullins posted in February 2006: Synchronize time throughout your entire Windows network.

What you need to know about OpenSSH key management

OpenSSH is one of my favourite tools, and one I take for granted because I use it all day, every day.

It is a Swiss Army-knife of coolness that can be used to provide secure connections to insecure sites in insecure places like free Wi-Fi-offering coffee shops. OpenSSH can be used to remotely administer systems, provide encrypted file sharing via sshfs, bypass draconian corporate firewall policies (okay, maybe that isn’t the best example of OpenSSH coolness), and a whole lot more.

Before you’re really able to appreciate all that OpenSSH has to offer, you have to learn the basics, and that means key management. So we’re going to look at how to manage and use OpenSSH public/private keypairs.

Generating OpenSSH keys is easy, and doing so allows for passphrase-based keys to be used for login authentication instead of providing your password. This means you have the private key stored locally, and the public key is stored remotely. The two keys together form a cryptographically secure keypair used to perform authentication, without sending a password over the network.

To generate an RSA2 key (default 2048 bits) with a special comment to identify its use and saved to ~/.ssh/server1_rsa and ~/.ssh/server1_rsa.pub, use:

$ ssh-keygen -C "special key for server1" -t rsa -f ~/.ssh/server1_rsa
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/joe/.ssh/server1_rsa.
Your public key has been saved in /home/joe/.ssh/server1_rsa.pub.
The key fingerprint is:
fb:8a:23:82:b9:96:a1:9c:d5:62:58:15:9a:8f:f9:ed special key for server1
The key's randomart image is:
+--[ RSA 2048]----+
|     ..          |
|    o.           |
|   o.            |
|   .+            |
|  oo..  S        |
| o +...  .       |
|oo* .. ..        |
|+=. . o. .       |
|o. . ..E...      |
+-----------------+

Keeping this key to yourself isn’t useful, so it needs to be copied to a remote server where it will be used. You can do this manually by copying it over and then moving it into place, or you can use the ssh-copy-id command:

$ ssh-copy-id -i ~/.ssh/server1_rsa joe@server1.domain.com

Once you provide the account password, the ~/.ssh/server1_rsa’s public key will be copied to the remote server and placed into ~/.ssh/authorized_keys. You should then be able to log in using the key, and its passphrase, from that point forward.
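If ssh-copy-id isn’t available on your system (it ships with OpenSSH on most Linux distributions, but not everywhere), the same thing can be done by hand with something along these lines, which appends the public key to the remote authorized_keys file:

$ cat ~/.ssh/server1_rsa.pub | ssh joe@server1.domain.com 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'

Just make sure the remote ~/.ssh directory and authorized_keys file are not group- or world-writable, or sshd may refuse to use them.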

Using the ~/.ssh/config file can really make life easier. With that configuration file, you can easily setup various options for different hosts. If you wanted to have multiple SSH public/private keypairs, and want to use a specific keypair for a specific host, using ~/.ssh/config to define it will save some typing. For instance:

Host server1 server1.domain.com
  Hostname server1.domain.com
  User joe
  IdentityFile ~/.ssh/server1_rsa

Host server2 server2.domain.com
  Hostname server2.domain.com
  User root
  IdentityFile ~/.ssh/server2_rsa

In this example, when you do ssh server1, it will connect to server1.domain.com using the private key in ~/.ssh/server1_rsa, logging in as “joe”. Likewise, when connecting to server2.domain.com, the ~/.ssh/server2_rsa key is used and you will connect as the root user.

If you have changed the remote server’s SSH key (either by installing a new operating system, re-using an old IP address, changing the server keys, whatever), and you have strict key checking enabled (usually a default), you may see a message like this:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!

Warnings like this should be taken seriously. If you don’t know why the key has changed, find out from the administrator of the box before assuming the change is benign, making any changes, or even completing the login. If you know this is an expected change, then use the ssh-keygen tool to remove all keys that belong to that particular host from the known_hosts file (as there may be more than one entry for the host):

$ ssh-keygen -R server1.domain.com

This is especially useful if you are using hashed hostnames. What are hashed hostnames, you ask? Hashed hostnames are a way to make the known_hosts file not store any identifying information on the host. So in the event of a compromise, an attacker would be able to obtain very little information from the hashed file. If you had an entry like this in your ~/.ssh/known_hosts file:

server4,10.10.10.43 ssh-rsa
AAAAB3NzaC1yc2EAAAABIwAAAIEAtNuBVGgUhMchJoQiDTZ+Nu1jzJOXxG9vo5pVWSbbic4kdAMggWrdh
XBU6K3RFIEwxx9MQKR81g6F8shV7us0cc0qnBQxmlAItNRbJI8yA4Ur+2ggFPFteqUEvOhA+I7E8REcPX87
urxejWK3W11UqOXyjs7cCjoqdps8fEqBT3c=

This clearly identifies that you have at some point connected to “server4”, which has an IP address of 10.10.10.43. To make this information unidentifiable, you can hash the known_hosts file to make the above look like this instead:

|1|sPWy3K2SFjtGy0jPTGmbOuXb3js=|maUi1uexwObad7fgjp4/TnTvpMI= ssh-rsa
AAAAB3NzaC1yc2EAAAABIwAAAIEAtNuBVGgUhMchJoQiDTZ+Nu1jzJOXxG9vo5pVWSbbic4kdAMggWrdh
XBU6K3RFIEwxx9MQKR81g6F8shV7us0cc0qnBQxmlAItNRbJI8yA4Ur+2ggFPFteqUEvOhA+I7E8REcPX87
urxejWK3W11UqOXyjs7cCjoqdps8fEqBT3c=

It’s the same host, but now in a format that only ssh and sshd will understand. This is where the ssh-keygen -R command is so necessary, since trying to find the entry for host “foo.domain.com” in a hashed file would be impossible. To hash your known_hosts file, use:

$ ssh-keygen -H
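If you would rather have new entries hashed automatically as they are added, you can also turn on the HashKnownHosts client option in ~/.ssh/config or /etc/ssh/ssh_config:

HashKnownHosts yes

Note that this only affects entries written from that point forward; running ssh-keygen -H once will convert what is already in the file.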

There is so much that can be done with OpenSSH, and this tip mostly dealt with key management. Next, we will look at some quick one-liner commands to help accomplish some basic, and some not-so-basic, tasks with OpenSSH.

Open source is not just for Linux: 14 apps great for Windows users

Recently I had a client that had a need that simply couldn’t be fulfilled with proprietary software. Well, that’s not exactly true. There were plenty of proprietary titles that could do what she needed done, but none that were at her budget.

So I did what any advocate of open source software would do–I introduced her to the world of FOSS. She was amazed that so much software existed that was not only quality, but very cost effective.

That little interaction reminded me that the biggest hurdle open source software faces is not incompatibility or a lack of solid code–but a lack of recognition. The majority of Windows users out there believe that if you want good software, you have to pay for it. So I decided to highlight the open source projects out there that run on Windows so you could, in turn, help spread the word by using and promoting these tools to your fellow Windows users.

Now…on to the software.

#1 LibreOffice: This one is, with the exception of the “new name”, obvious. If you are looking for the single best replacement for MS Office, look no further than LibreOffice. Yes, it is a fork of OpenOffice, but it forked at version 3.x so it benefited from an already solid code base. This piece of software is a must-have for open source advocates. And don’t worry, although it may claim to be in “beta”, many users (including myself) are using it in production environments.

#2 Scribus: If you are looking for desktop publishing for creating marketing materials, manuals, books, fliers, etc.–look no further than Scribus. Scribus can do nearly everything its proprietary counterparts can do (such as PageMaker and QuarkXPress) only it does it with a more user-friendly interface and doesn’t require nearly the resources the competition begs for.

#3 The GIMP: Need a raster editor? The GIMP is as powerful as Photoshop and costs roughly US$700 less. And if you’re unhappy with The GIMP’s current interface, hold off until around March when the new single-windowed interface will arrive. Take a look at how the new UI is evolving at the Gimp Brainstorm.

#4 Inkscape: Inkscape is to vector graphics what The GIMP is to raster graphics. Of course anyone that has worked with vector graphics knows they are not nearly as easy to work with as raster graphics, but Inkscape goes a long way to making that process as easy as it can be.

#5 GnuCash: This is the de facto standard accounting software for Linux. GnuCash is amazing in features, usability, and reliability. I have been using GnuCash for years and have yet to encounter a single problem. It does reporting, double-entry accounting, small business accounting, vendors/customers/jobs, stock/bond/mutual fund accounts, and much more.

#6 VLC: VLC, from the VideoLAN project, is the multimedia player that can play nearly everything. In fact, VLC claims, “It plays everything”. I can vouch for that claim. I have yet to find a multimedia format VLC couldn’t handle. Ditch Windows Media Player, with its crash-prone, resource-hogging behavior, and migrate to a lightweight, reliable, all-in-one multimedia player.

#7 Firefox: Another open source project that needs no introduction. Firefox is quickly helping the “alternative browsers” usurp the insecure, unreliable IE as the king of browsers. Firefox 4 should be out very soon, and it promises more speed and security.

#8 Claws Mail: This is my mail client of choice. Not only is Claws Mail cross-platform, it’s also the single fastest graphical mail client available. If you want a mail client that starts up in mere seconds, has plenty of plugins, and can be configured more extensively than any other mail client, Claws Mail is your tool. Unfortunately, Claws Mail cannot connect to an Exchange server, but for all of your POP/IMAP accounts, this is what you need.

#9 VirtualBox: No, not everyone is working with virtual machines, but for those of you who are, make sure you give VirtualBox a go before you dive in and purchase VMware. VirtualBox has many of the features that VMware offers but can bring you into the world of virtual machines without VMware’s overhead cost.

#10 TrueCrypt: This is one of those applications for the paranoid in all of us. If you need encrypted filesystems to safely hide away all of your company secrets, or just your personal information, then you need to try TrueCrypt. TrueCrypt creates a virtual encrypted disk that can be mounted and unmounted only with the configured passphrase. Without that passphrase, the data within the filesystem cannot be reached. Just make sure you do not forget or lose that passphrase.

#11 Calibre: With the amazing growth of ebooks (Amazon reported that in 2010, 60 percent of all books sold were ebooks), people need an easier way to manage their collections or convert their files and books to a readable ebook format. Calibre is one of the best tools for this job. I have four ebooks on sale at various ebook resellers (check Smashwords for me) and have used Calibre to help manage the conversion from .rtf format to a usable file. The only format Calibre has trouble converting to is PDF.
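
For what it’s worth, those conversions can also be scripted. Calibre ships with a command-line converter, ebook-convert, which works out the output format from the file extension you give it; a minimal example (the file names here are just placeholders):

$ ebook-convert mybook.rtf mybook.epub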

#12 Audacity: Anyone who needs audio editing software should take a look at this powerful open source option. Audacity will let you create podcasts and music, convert audio to various formats, splice files together, change the pitch of a recording, and much more.

#13 PeaZip: Who doesn’t have to work with archives? Nearly every PC user has had to unzip a file or create an archive for emailing. Why not do this with an open source tool that can handle nearly every archiving format on the planet?

#14 ClamWin: Why wouldn’t you trust an antivirus solution created by open source developers? You should. ClamWin is a solid antivirus solution and should soon have its real-time scanning component completed. If you need an antivirus solution that doesn’t drag your machine to a screeching halt during scans or insist on installing add-ons you do not want or need, give ClamWin a try.

I could go on and on with the list of open source software for Windows, but you get the idea. Open source is not just for Linux users. Users of all platforms can benefit from adopting open source titles. Not only will these software solutions save you money immediately, they will save you more and more money over time, since you don’t have to pay for software support when something goes wrong; just e-mail a developer or hit the forums to find quick and readily available solutions.

Open source is not ideal for every situation, but you will be surprised how many times you will find an open source solution superior to its proprietary cousins.

Cloud, mobility transform software space

The maturing cloud computing model and rise of enterprise mobility are making their mark on the software industry, impacting the way independent software vendors (ISVs) and system integrators (SIs) do business and opening the doors for non-enterprise players to enter the space, note industry insiders.

According to Trent Mayberry, Accenture’s technology geographic lead for Asean, as software moves away from the shrink-wrapped sales and app distribution models, toward the software-as-a-service model, there will be changes in the way ISVs and SIs operate.

With regard to ISVs, he said: “Governance is perhaps one area that will take new forms. Processes that have been traditionally rigid will change and give way to more adaptive models to encourage the viral adoption that cloud and mobility promises.”

Mayberry also noted that enterprises are looking at significantly changing or eliminating existing business processes internally, as well as engaging with customers in new ways. As a result, SIs will have to adapt to customers’ changing needs to stay relevant, he said.

Gartner’s research director for software markets, Yanna Dharmastira, concurred. She pointed out that within the infrastructure arena, ISVs will have “broader roles and responsibilities” to play as they will need to provide existing software and services via the platform-as-a-service (PaaS) model, in a more reliable and secure way.

These vendors would then need to enhance their hosting capabilities and overall technical support services, Dharmastira said.

The analyst added that, to better handle cloud-based requests, ISVs will also look to acquire smaller companies that already have SaaS offerings.

Asheesh Raina, principal research analyst for software at Gartner, noted that in the application layer, ISVs will also need to consider which server platforms they want to develop for. Their choice will determine the level of optimization and the type of drivers needed to run the apps, Raina said in an e-mail interview.

Cloud race heats up
Tan Jee Toon, country manager for IBM Singapore’s software group, said the need to ensure the availability of services–spanning across the stack from infrastructure to applications–will underscore the maturing cloud computing model as a “true example” of service-oriented architecture (SOA), an application delivery framework which was heavily touted a few years back.

“Cloud is SOA at the systems level,” Tan explained in an e-mail. “The fundamentals are the same [except] that it shifts the level of abstraction to a few levels lower down the stack. The services in the new environment [that] are provided by [the] cloud are now meant for infrastructure consumption [too], as compared to application consumption only in traditional models of SOA.”

Several top IT vendors are already placing their bets on the advancement of cloud computing, including Microsoft. Michael Gambier, the software giant’s Asia-Pacific general manager for server and tools, told ZDNet Asia in an e-mail that the market is “shifting increasingly toward IT-as-a-service”, and this encompasses infrastructure-as-a-service (IaaS), PaaS and SaaS.

“We continue to invest in the cloud because we see it as an enormous area for growth.

“[As we build out our capabilities], we can offer customers the full range of our offerings, whether it is an on- or off-premise environment. We’re also working with customers to help them understand what makes sense for them to move to the cloud,” Gambier said.

He revealed that Redmond last March dedicated 70 percent of its engineers to work on cloud-based offerings. This year, that figure is expected to grow to around 90 percent.

Such efforts bode well in the eyes of Tim Sheedy, senior analyst and advisor of Forrester Research’s CIO group.

In a phone interview, Sheedy identified Redmond as the frontrunner among top software vendors such as IBM, Oracle and Hewlett-Packard (HP) to embrace cloud computing. He pointed to Microsoft’s Azure and Dynamics CRM Online offerings as examples of the company’s push toward the cloud space.

“It knows that it is in trouble with Google challenging its desktop operating system and Office productivity suite, and that it has to reinvent in order to stay competitive,” the Forrester analyst said. “I expect to see deep cloud connectivity in future Windows OS-based products.”

Microsoft’s cloud rivals are not sitting idle either.

Oracle’s vice president of Fusion Middleware, Chin Ying Loong, said in his e-mail that enterprises, particularly major companies, are evolving their current IT infrastructure to become more “cloud-like”.

These businesses, Chin added, are looking to improve internal services to various business units in order to “provide greater agility and responsiveness to business needs, higher quality of service in terms of latency and availability, lower costs and boost utilization”.

Moving forward, he said middleware that can offer services securely and enable strong governance, such as Oracle Fusion Middleware, a set of tools touted to help customers build a flexible infrastructure so their organizations can be more agile, will continue to be a key focus in 2011.

Mobility in demand
Other vendors are leveraging the cloud to deliver their services to the increasing number of devices used by the mobile workforce.

SAP, for instance, is placing its bets on mobility and real-time analytics, which it said are the technologies customers are asking for today. Steve Watts, president of SAP Asia-Pacific and Japan, said the software vendor has been steering its direction toward these two arenas for the past nine months.

Enterprise mobility, in particular, is “one of the fastest growing areas” in the business space, Watts told ZDNet Asia, adding that SAP aims to have 1 billion mobile workers running on SAP software.

“Mobility will become a key enabler for our customers, from managing industrial process flows across borders to managing people locally, and it will change how organizations operate,” he said.

SAP BusinessObjects Explorer for Apple’s iPhone, for example, is a business intelligence (BI) tool that he said caters to mobile workers looking for quick, on-the-move access to business-critical information which was previously only available on stationary devices.

Paul Schroeter, strategic marketing manager of software at HP Asia-Pacific and Japan, agreed that the importance of mobility is “coming in by stealth” as workers want flexibility for IT resources.

To address this, Schroeter said HP will continue to play strongly in the application lifecycle management (ALM) space, particularly in ensuring the security of apps deployed within the enterprise.

He pointed to the acquisitions of Arcsight and Fortify, as well as the release of its ALM 2011 suite in December last year, as evidence of HP’s focus on the development, deployment and maintenance of enterprise apps.

Non-enterprise competition
According to Forrester’s Sheedy, the demand for increased enterprise mobility will open doors to non-enterprise vendors to muscle into the core industry. The analyst cited Apple and Google as prime contenders.

New software delivery models, such as native app stores and the Web, are proving to be a “disruptive force” in the enterprise space, he said, noting that mobile operating system (OS) players such as Apple with its iOS and Google with Android, the two dominant forces in this market, have the opportunity to enter the space.

These players could also act as SIs to merge enterprise apps published on the iOS or Android platforms into companies’ backend IT systems, Sheedy suggested.

With Android-based smartphones and Apple’s iPhones and iPads increasingly becoming employees’ most-used devices to access business data, this development might open new revenue streams for the mobile platform players, he added.

“Whether they will do so is another matter, but I do see that these two companies will take revenue away from established software vendors like IBM, Microsoft and Oracle,” the analyst said. He added, though, that the incumbents will not lose major IT deals just yet, as the mobility trend is still in its infancy.

And it seems Google, for one, is looking to beef up its play for the enterprise space which the vendor already services via its Google Apps e-mail and productivity suite. Just last week, the Internet giant announced that it is removing the downtime clause from customers’ service level agreements (SLAs) for its Google Apps service as part of efforts to differentiate itself from competitors and entice new firms to sign up for its cloud-based services.

Security, mobile and cloud hit S’pore IT courses

In keeping with “hot” technology trends particularly in mobile, security and cloud computing, a number of schools in Singapore have introduced new courses or revamped existing curricula to groom a workforce ready for the new demands of the IT sector.

The School of Informatics and IT (IIT) at Temasek Polytechnic (TP), for example, recently rolled out a new Diploma in Digital Forensics to cater to the rising demand for IT security professionals with the skills to investigate crimes committed using computers and digital devices. The first batch of students will join the course this April.

In an e-mail interview with ZDNet Asia, course manager Mandy Mak explained that digital forensics involves the scientific analysis of evidence from sources such as computers, cell phones and computer networks to prosecute those who have hacked into the computers and information systems of organizations.

The landscape of IT security, noted Mak, is ever changing. While ensuring the security of information systems remains an imperative for corporations, there is a growing need to respond to and investigate security threats and incidents due to the pervasive use of digital and mobile devices in society, she pointed out.

“The increasing concern over data breaches, fraud, insider trading and [other] crimes using digital devices has led to a need for digital forensic experts who can gather evidence and trace how a crime has been carried out,” Mak elaborated.

Over at TP’s School of Engineering, the Diploma in Infocomm & Network Engineering has been tweaked. Course manager Yin Choon Meng said the program, formerly called the Diploma in Info-Communications, was renamed to more accurately reflect the focus of the course curriculum and the competencies of its graduates, especially in the area of network and communications engineering.

Yin highlighted in an e-mail that besides the technical foundation of information, network and communications engineering, students in the program are also exposed to social media, network security and cloud computing. This is to give students insight into complete ICT ecosystems and equip them with the capabilities to flourish in the IT, networking and communications industries, he noted.

According to Yin, companies are increasingly making use of new media channels and cloud computing, which give rise to concerns about network security. In addition, the proliferation of smartphones and tablets and the introduction of Singapore’s next-generation national broadband network are giving rise to new business offerings and new ways rich media can be delivered, he said.

Academia keeps watchful eye on industry
Mak, who is also the deputy director of Technology & Academic Computing, said that the IIT faculty keeps a close watch on tech trends including virtualization and “inevitably covers such topics in some of the existing subjects we teach”.

Mobile applications are also a “hot tech trend”, she noted, adding that the Diploma in Mobile & Network Services offered by the IIT has become increasingly popular with students. She attributed this to the “interest in creating mobile apps” for platforms such as iOS and Android.

The growing trend of mobile communications has also spawned a brand-new course at the Institute of Technical Education (ITE). An ITE spokesperson said in an e-mail that the high penetration rate of smartphone users in Singapore made it “timely” for ITE to launch a new Mobile Systems and Services certification course. The program is designed to produce a “new breed of mobile systems support associates who are well-versed in mobile network infrastructure and capable of developing mobile applications”, she explained.

At the Nanyang Technological University’s School of Computer Engineering (SCE), the curriculum is reviewed and tweaked annually to incorporate what it deems as “sustained IT and industry developments as opposed to short-lived fads”, according to Professor Thambipillai Srikanthan, who chairs the SCE.

In an e-mail interview, he explained that a course update can range from revising existing syllabi to keep up with new technology and industry advances, such as newer languages and standards like HTML 5, to introducing new electives.

Srikanthan said the SCE recently introduced a host of new final-year electives as part of its revamped curriculum, which include cloud computing and its related applications, augmented and virtual reality, and data analytics and mining.

Trends influence, not dictate
While tech trends do play a crucial part in the planning of IT courses, they do not dictate the entire curriculum, Srikanthan emphasized. The curriculum not only has to train graduates for current times, but more importantly, to prepare them to adapt to the rapidly changing IT technologies, he pointed out.

The SCE curriculum, for example, is carefully designed to achieve a balance between the fundamentals and technologies so that the students’ skills do not become obsolete by the time they graduate, said Srikanthan. “It is the fundamentals which will serve as a bedrock to allow the graduates to remain versatile and adapt to the evolving technology developments.”

Benjamin Cavender, principal analyst at China Market Research Group, concurred. He noted there is a need for courses and students to stay current as standards are changing extremely quickly and the development time for new technologies is shortening.

“It’s definitely important that the curriculum [focuses] on current and emerging trends, but it’s important that information is presented in a way that encourages students to stay current throughout their careers,” he said. “In that sense, learning how to learn becomes more important than what they learn.”

Dual-core to boost smartphone multimedia

With the arrival of dual-core smartphones, consumers can expect a better multimedia experience while enterprise users stand to benefit from boosted productivity apps and improved video quality, say industry players.

At the Consumer Electronics Show earlier this month, phonemakers Motorola and LG announced that they will be launching dual-core smartphones this year. Motorola plans to launch two handsets–Atrix and Droid Bionic–while LG is releasing the Optimus 2X. Reports noted that other device manufacturers will likely follow suit.

In a phone interview with ZDNet Asia, T.Y. Lau, senior analyst at Canalys, highlighted that as dual-core mobile devices are not yet out in the mass market, most of the benefits of dual-core smartphones are based on what is advertised by the manufacturers. LG launched its first dual-core Optimus 2X early this week, but only in its homeland South Korea, she noted.

According to Lau, mobile manufacturers are touting better multimedia capabilities in dual-core smartphones. Video and audio quality will improve on such handsets, she said, adding that the function will be important for consumers when high-definition (HD) content is available. Consumers will also be able to enjoy smoother gameplay for console-styled games or even 3D games, she noted.

Patrick Fong, product manager of mobile communications at LG Electronics, concurred. In an e-mail, Fong said the Optimus 2X allows users to view and record videos in full HD 1080p as well as play graphically intensive games.

Canalys’ Lau noted that the boosted multimedia capabilities might be able to push the growth of video in the enterprise. Networking company Cisco Systems, she said, has been pushing the concept of “video as the next voice” and dual-core smartphones may be able to make that vision a reality.

Aside from multimedia applications, Lau noted that dual-core can bring other benefits such as faster Web browsing experience, a more responsive touchscreen, and improved multitasking. Enterprise users can also benefit from better enterprise applications such as customer relations management or business intelligence tools, said Lau.

Contrary to the belief that a smartphone needs more energy to power two cores, Lau said power consumption for dual-core phones is actually reduced. She explained that the processing workload can be shared between the two cores, whereas a single-core chip might be overloaded.

According to Qualcomm’s president for Southeast Asia and the Pacific John Stefanac, the company’s dual-CPU cores are asynchronous or able to operate at independent voltages and frequencies. This enables “finer power control and optimal performance at low power”, he explained in an e-mail interview.

Qualcomm was slated to release Snapdragon, its dual-core chip for smartphones, last year but has since indicated the launch will take place this year.

Two not for mainstream, yet
Stefanac told ZDNet Asia dual-core smartphones will be targeted initially at the high-end segment.

LG, said Fong, is labeling the Optimus 2X as a “super smartphone” and will be geared toward early adopter power users. The phone will be available in Singapore at the end of the first quarter, he added.

A Motorola spokesperson was unable to indicate in his e-mail reply when the Motorola Atrix will be available in Asia. The Droid Bionic is exclusive to Verizon Wireless in the United States.

Asked if developing apps for dual-core smartphones will be more challenging, Qualcomm’s Stefanac noted that it should be similar to single core phones. LG’s Fong agreed, adding that developers will not have to worry about the performance of graphic-rich applications on the Optimus 2X.

“We are certain that the 2X presents for developers the opportunity to put new software in the market where previously there simply just wasn’t enough processing grunt to run these programs credibly,” said Fong.

Specialized talent to top IT manpower demand

Skilled IT professionals such as software developers and project managers may now find it easier to secure a job, thanks to the burgeoning regional economy.

Job recruitment specialists ZDNet Asia spoke to pointed out that demand for this group of trained talent is back in the Asia-Pacific region, following a “slowdown” during the economic downturn two years ago.

In an e-mail interview, Lim Der Shing, CEO of online portal Jobscentral, said that network administrators, social media specialists and consultants who can strategize and influence business performance can look forward to more job opportunities. He also revealed that the site has seen a 10 percent increase in IT-related job postings compared to the same period last year.

Similarly, Thomas Kok, senior program director at the National University of Singapore (NUS), noted in an e-mail that professionals plying the risk and control trade, such as IT auditors, risk managers and business continuity managers, will be more sought after by employers in 2011.

Kiran Sudarraja, practice leader for technology at PeopleSearch, gave further insight into how IT jobs will evolve, following the amalgamation of technology and business in today’s corporate world.

According to him, 2011 is a “growth year”, with emphasis on roles requiring creativity and innovation. These may involve business process improvement, cost reduction and productivity gains, as well as expansion into new markets and staying ahead of the competition.

Sudarraja added that specific areas where IT skilled talent are in high demand are virtualization, cloud computing, green IT, social media, mobile payments and e-commerce. Compliance, security and support functions for financial regulations are also spheres in which trained professionals are needed.

Pay revision on the way
Jobscentral’s Lim pointed out that the strong economic rebound has given businesses a boost, with many now looking to reward their workers who took pay cuts in 2009 and 2010.

“Unemployment is low in Southeast Asia and even lower for IT professionals. As such, it will increasingly become an employee’s market for 2011. These factors will naturally lead to rising wages in the form of increments and bonuses,” he said.

PeopleSearch’s Sudarraja put the increment figures at between 4 and 5 percent, which he said is the norm.

While the market may seem transparent and fluid for now, the rise in salaries will not be even across the board, noted NUS’ Kok. He stressed that the market may still prefer those with “good, relevant experience, and those with relevant qualification and certification”.

“Talent management, especially in the IT industry, is crucial in these strong market conditions, and we will see continued upward pressure on salaries,” said Kok. “However, there will be differentiators–selected professionals may experience a larger rise in their salaries.”

This sentiment was echoed by Sudarraja. Top performers, he explained in his e-mail, are “treated well”, and their remuneration and benefits package will largely be dependent on the forecasts for the year ahead as well as the previous year’s performance.

With the greater emphasis on skilled manpower, more professionals are also signing up for upgrading courses.

The Qualified Information Security Professional (QISP) program, launched in Singapore six months ago, has seen an exceptional response with enrolment “exceeding expectations”, according to Gerard Tan, president of the Association of Infocomm Security Professionals (AISP), which jointly developed the course with NUS’ Institute of Systems Science.

In a phone interview, Tan revealed that 125 students had signed up for the program to date, of whom “quite a lot” are not in the security field. The course was run thrice in the fourth quarter of 2010, with two runs scheduled for the first half of this year.

Training mindset not future-proof
Sudarraja noted, however, that in terms of training the workforce for future challenges, Asia, including Singapore, is still trapped at the “resource-driven” stage instead of the ideal “result-driven” stage. This, he added, is hindering the region’s manpower from carrying out IT tasks more effectively.

“Ideally, a result-driven organization would look at the business point of view, but Asia as a whole, and even Singapore, has yet to get there,” he commented. “While academic training can only enhance one’s knowledge, it is on-the-job training that continues to hone the skills and prepare a better workforce.”

Jobscentral’s Lim remained positive nonetheless. According to him, Singapore’s large pool of university-trained engineers and active government interest in the IT industry will be adequate to meet future challenges.

The rising number of locally-owned technology successes “shows we have the manpower and business know-how and support systems to keep up with changes in the field”, he said.

Outsourcing to slow down
Lower manpower costs have seen Asia benefiting from IT offshoring for the last two decades, but industry observers pointed out that the parameters have changed, making the region less appealing as an outsourcing destination. In addition, MNCs may be less likely to offshore their functions en masse.

“[The decision to outsource] will depend on a multitude of factors, including the state of economies of Europe and the U.S. where most MNCs are headquartered, and whether there are planned expansions to focus on the growing Asia-Pacific markets,” noted NUS’ Kok.

Increased wages across India, China and Southeast Asia, as well as fluctuations between the greenback, the euro and Asia-Pacific currencies, have put a slight dent in outsourcing, he added.

Similarly, PeopleSearch’s Sudarraja highlighted that companies continue to look for cost-effective locations, and jobs may eventually flow to emerging markets such as Egypt, Brazil and Africa. According to him, the Asia-Pacific region will move toward self-sufficiency and cater to the region within.

“Outsourcing will continue for now but will slow down as countries all over the world look at inflation and jobless rates, while policy makers will try to woo other economies to invest in its market but limit its own [entities] from outsourcing,” he noted.

Jobscentral’s Lim argued however, that MNCs that did not make big investments during the recession years are now sitting on large cash positions and will look to expand this year. “This means that plans for outsourcing of IT services will resume and the traditional outsourcing centers of India, Philippines and Singapore will benefit,” he concluded.

Key open source security benefits

Discussions of the relative security benefits of an open source development model, like comparative discussions in any realm, all too often revolve around only one factor at a time. Such discussions tend to get so caught up in their own intricacies that, by the time they wind down, nobody is looking at the big picture any longer, and any value they might have had has already evaporated.

When trying to engage in a truly productive exchange of ideas, it is helpful to keep in mind the fact that when something is worth doing, it is usually worth doing for more than one reason. This applies to the security benefits of an open source development model, as it does to other topics of discussion. A small number of such factors behind the security benefits of open source development are examined here:

The Many Eyes Theory
Probably the most common and obvious scratching post in online discussions of open source security is the so-called “many eyes” theory of software security. The simple version is often articulated by the statement that given enough eyeballs, all bugs are shallow. The most common retort is that open source means that more malicious eyeballs can see your bugs, too.

Of course, this counterargument is predicated upon a generally false assumption, that bugs are typically found by looking at source code. The truth is that bugs are found by mistreating software and observing how it fails, by reverse engineering it, and by a user simply going about his or her business until discovering that a program has done something like delete all of the previous hour’s work.
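
As a crude sketch of that “mistreat it and watch it fail” approach, you can throw random input at a program and note when it chokes. In the one-liner below, someparser is a hypothetical stand-in for whatever you happen to be testing:

$ for i in $(seq 1 100); do head -c 512 /dev/urandom > input-$i.bin; someparser input-$i.bin >/dev/null 2>&1 || echo "input $i triggered a failure"; done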

Even if the most common counterarguments against it are mostly hot air, though, this theory of improved security is no true guarantee of practical security benefits. Possibly the most difficult counterargument to dismiss effectively, despite its fallacious reasoning, is the simple statement that the open source “many eyes” theory of software security does not work because it provides no guarantees. It is difficult to dismiss because it is true that no such guarantee exists. That difficulty is awfully frustrating because many people who make such arguments, and presumably many of those who listen to them, completely overlook the fact that it does not have to be a guarantee to be a benefit. All it needs to be is an increased likelihood of security, or even just increased opportunity without a counterbalancing problem.

The “Not Microsoft” Theory
Microsoft is widely recognized as a symbol of poor software security. Generations of computer users have essentially grown up experiencing the security issues that make such a reputation so well deserved. The fact that MS Windows 95, 98, and ME all failed to even do something so simple as maintain memory space separation is the kind of gross, dangerous oversight in the design of a system that can permanently tarnish a reputation. The simple fact that your software does not come from Microsoft lends it an air of at least a little legitimacy amongst some people, because while that does not prove it is secure, it at least suggests it may not share the traditional problems of MS software security.

Microsoft has launched a number of initiatives over recent years to try to rehabilitate that reputation, of course. Much of its success in this area owes more to money spent advertising a greater focus on security than to the security efforts themselves, but meaningful changes have also been made in the way Microsoft produces software, in an attempt to improve its technical security alongside the copious quantities of marketing dollars spent on apparent security. These days, for many people, it is no longer enough to say, “This is not software from Microsoft” to make them think your software is secure. If you want to impress people, you have to explain how it is secure, not merely point out that it does not come from a vendor well known for its past security failings.

Even so, pointing out that Microsoft was not involved in your software development process can still carry some weight with at least some readers or listeners. Microsoft is still going through some growing pains on its way toward producing more secure software, and internal conflicts between secure design and other (less technical) concerns for the commercial viability of its software offerings still present major hurdles to improving software security. Just be aware that to effectively use this argument you will probably need to be able to back it up with current, relevant explanations of the security problems that still lurk in the software development processes of this industry giant.

The Transparency Theory
Possibly the most unassailable security argument for open source software development is that of transparency. Because the source code is open, and because (especially in the case of very popular projects) many people are motivated to sift through the source code of open source software projects for a variety of reasons, that source code is likely to be seen by a great many people. Apart from the notion that bugs become “shallow” when enough eyeballs scrutinize the software, those eyeballs also provide some discouragement for those who might try to sneak malicious–or at least dubious–functionality into the design of a software system.

The most obvious and immediate counterexample is probably the OpenBSD project’s 2010 scandal over a claim that its IPsec implementation contained an FBI “backdoor”. The fact of the matter is that this claim is most likely false, whether the person making the claim knows it or not; a number of developers have set out to analyze the design of the system and find such backdoors if they exist, and come up empty-handed. Even if the claim proved true, however, it would not invalidate this theory of improved security for open source software.

The fact of the matter is that the quick announcement of the claim by the OpenBSD project founder, Theo de Raadt, illustrated the effects of open source software development as a motivator for being honest and up-front with the public about security matters. By contrast, the majority of large corporate software vendors would have been more inclined to sweep such claims under the rug and, even if they proved true, try to keep such knowledge out of the hands of users for fear it might affect sales. There is little motivation to share such issues when it might damage sales figures in cases where the closed source development process (and development employees who have signed NDAs) ensures a very low likelihood of outsiders stumbling across such vulnerabilities independently.

The Unix Theory
The Unix style of operating system (and other software) design provides substantial benefits for security over many other approaches to software design. Basic (but complete) privilege separation, modularity, and decades of testing under fire are among the many reasons Unix-like operating systems often provide greater security benefits than competing OSes.

While this argument stands up well for certain specific pieces of software or user environments, it is not universally applicable. Open source operating systems like Haiku and Plan 9 are not very Unix-like and, while they may be very well designed systems with strong security characteristics, discussing the security benefits of Unix does not address these systems’ benefits as open source software. More to the point, there are closed source Unix-like systems that offer much the same benefits. Some other open source software is also not very Unix-like, such as the Mozilla Firefox browser and the Pidgin multiprotocol IM client, both of which take a monolithic, “feature rich” approach to software design that stands in marked contrast to the Unix approach of designing programs to do one thing, do it well, and interface easily with other programs that do other things.

For those pieces of open source software that do conform to the expectations of Unix, however, this argument is alive and well, and quite valid. The extent to which tools like cat and grep have grown out of control in some implementations and drifted away from the Unix philosophy of software design is troubling to some, but the tenets of that philosophy are still visible in the basic design of these tools. Simplicity, clarity, and care in the design of software is a pleasant benefit that arises in part from such an approach to software development.
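
As a quick illustration of that philosophy, each program in the pipeline below does one small job, yet chained together they count the recurring error lines in a system log (the log path and field positions are just an example for a typical syslog layout):

$ grep -i error /var/log/syslog | cut -d' ' -f5- | sort | uniq -c | sort -rn | head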

Breadth of knowledge
The important thing in considering such matters is to be aware that circumstances are more complex than a single, pithy statement about the security of open source software. Several arguments are relevant to discussions of the security benefits of open source development, including not only those listed above but others as well. Do not fixate on one of them and neglect the rest, or you may get backed into the dead end of a merely semantic argument about that single security benefit of open source software development. Do not put all your eggs in that single basket when selecting software for your use, either. Seek out, and consider, other potential arguments, not only for discussions with others who might disagree with your analysis, but also because you need to know something about the major arguments to make an informed decision about what software to use and how to use it in the most secure manner.

Finally, do not make the mistake of making–or being taken in by–the Invulnerability Theory. Some have claimed that certain open source software, especially including Linux in general or Ubuntu Linux in particular, is impervious to security exploits of any kind. Such claims are patently false, and in fact quite obviously ridiculous. Linux is not the most secure operating system, and neither is anything else, regardless of development model.

Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.

How to reduce the Group Policy refresh interval

Group Policy is a great way to deploy settings to users and computers centrally–unless you wind up waiting for the updates.

The default interval for refreshing Group Policy on a computer account is 90 minutes, with a further random offset of 0 to 30 minutes. While this schedule is fine for most situations, there may be times when you need to shorten it for quick updates.

There are various ways to shorten the Group Policy refresh interval. But be careful when you make these changes because it will increase the traffic from domain controllers to computer accounts.

One approach is to have the server computer accounts receive a tighter refresh policy, with the assumption that there are fewer servers than client computers.

The refresh interval is defined in Group Policy under Policies | Administrative Templates | System | Group Policy, in a value called Group Policy Refresh For Computers (Figure A). Once that value is enabled, the interval is specified in minutes and determines how frequently computer accounts will try to update policy.

Figure A

Another option is the offset, labeled Random Time Added. The offset is important because it ensures that the domain controllers aren’t hit with a flood of simultaneous update requests. Figure B shows a tightened value for the update refresh interval.

Figure B

A good approach is to tighten the update interval when a number of frequent changes need to be deployed, such as after a move or a major system update. But consider whether a tighter interval is needed, especially because in most cases the updates do not retrieve a new configuration for the computer account.
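
If the goal is simply to push out a one-off change, it can be easier to leave the interval alone and force a refresh on the affected machines instead. From a command prompt on the target computer, run:

gpupdate /target:computer /force

The /force switch reapplies all settings rather than only the ones that changed, and /target:computer limits the refresh to the computer configuration.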

On the other hand, large environments may want to make this interval much larger when thousands of computer accounts may be in use.

Rick Vanover (MCITP, MCSA, VCP, vExpert) is an IT Infrastructure Manager for a financial services organization in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

BPO to take Philippines to higher ground

Having toppled outsourcing giant India in the call center market last year, the Philippine ICT industry is aiming to level up further this year as the government and the private sector team up to set ambitious revenue goals and draft long-term programs.

According to analyst firm XMG Global, 2011 should be a positive year for the country’s IT market with overall IT spending estimated to grow 11 percent to US$3 billion.

The BPO (business process outsourcing) industry, one of the country’s main revenue earners, is again leading the charge. Industry group Business Processing Association of the Philippines (BPAP) is targeting to hit US$11 billion in revenues in 2011, a 20 percent increase from an estimated US$9 billion in 2010.

The Philippines last year dethroned India as the global call center hub, hitting US$5.7 billion in revenues against India’s US$5.5 billion and employing more call center workers than the former leader.

BPAP and the Philippine Commission on ICT (CICT) are projecting that the industry could create an additional 84,000 jobs this year, bringing the total number of IT-BPO workers in the country to 610,000.

The figures tally with XMG’s forecast, which estimated that the number of workers employed in offshore services will reach 651,425 in 2011. “One in 12 employed professionals in Metro Manila will be working either in BPO, call center or IT services,” said Phil Hall, principal analyst at the research firm.

BPAP’s chief, Oscar Sañez, said in an earlier statement that aggressive marketing, both locally and internationally, will be key for the Philippines to achieve the US$11 billion-revenue goal. “We have to increase the awareness of our potential employees of job opportunities in IT and BPO companies, including those in the knowledge process outsourcing and other non-voice sectors. We also have to improve our visibility internationally to market new services in new territories,” he said.

Sañez also noted that President Benigno “Noynoy” Aquino III in December 2010 had pledged to allocate 62 million pesos (US$1.4 million) as “BPO promotions fund”, adding that the amount would help the industry achieve this year’s revenue goal.

Focus to include broadband, digital TV
For the CICT, the government agency in charge of the local ICT market, boosting the BPO sector is just one part of the “digital strategy” which spans 2011 to 2016.

Ivan John Uy, who heads the agency, told ZDNet Asia in an interview that the five-year plan–which will be launched soon–aims to enhance the country’s software, telecoms, e-government and postal sectors.

This year, the CICT is coordinating with various academic and non-formal education institutions to “re-tool” jobless college graduates and youths, Uy said. “For instance, our nursing graduates who don’t have work yet can be trained to become medical transcriptionists and healthcare support specialists,” he said.

He cautioned that the country has to quickly replenish or augment its BPO manpower base. “We run the risk of [skills] shortage. We have lots of jobless people around but they don’t have the skills,” Uy said, adding that the government is also looking at the possibility of offering a “study now, pay later” scheme for the unemployed.

The CICT this year will also be preparing for the country’s migration to digital TV, he said, elaborating on plans for the local telecoms sector.

Last year, the National Telecommunications Commission (NTC)–which operates under the CICT–selected a Japanese digital TV standard which broadcast companies must adopt by 2015.

Uy explained: “As part of our preparations, I’ve directed the NTC to organize a technical working group to draft, within the year, implementation rules and regulations (IRR) which would also serve as a guideline for broadcast companies.”

Touching on the private telecoms sector, XMG said increased competition will force mobile operators to roll out better pricing, especially with regard to data plans and long-distance charges.

“[Leading] service providers will be those that can leverage their wireless and extended bandwidth capabilities,” noted Hall, who is based in Manila. “Price, services and local content provisioning will be the dominant lure and battleground, as social networking continues to grow dramatically and cuts into the SMS market.”

In the broadband space, the XMG analyst said subscriber base growth will be propelled by intense competition which will push down the prices of entry-level packages.

“Competition is increasing between fixed-line, cable and mobile providers,” he said. “Among telco giants PLDT Smart and Globe Broadband, [broadband] subscribers will continually lag behind cellular subscription[s]. However, broadband will continue to remain [these operators’] growth area [and see] double-digit growth, making it an important revenue stream for all carriers.”

Given the growing number of Filipinos clamoring for better quality of service and pricing, both at home and on the move, consumers are unlikely to stick to a single provider when buying broadband services, he noted.

“[To lead the market], service providers will need to develop loyalty programs and provide attractive pricing and bundling schemes,” Hall said. “Internet TV will not [gain] traction in the Philippines yet, but we foresee tie-ups between TV content providers and Internet for on-demand replay of shows. Watch for PLDT Group’s TV5 as they strategically evolve to become the natural fit to take on this leadership role.”

An interoperable government
Turning to e-government initiatives, Uy said the CICT will be pushing for interconnectivity and interoperability between IT systems deployed across different government agencies.

“Each agency has its own GIS (government information system) and data center which do not talk to each other. ICT adoption in the government is extremely low and fragmented,” he revealed.

XMG, though, is not expecting any major leaps in this area this year. Hall said: “However, if the new Aquino government follows its stated plan, we anticipate a slow progression from more use of IT in government departments to true e-government applications during this presidential term.”

With regard to the country’s postal service, which falls under the domain of the CICT, Uy said reforms are underway to transform post offices across the Philippines into self-sustaining community e-centers.

He noted that the Philippine postal service incurred losses totaling 300 million pesos (US$6.8 million) last year. “We need to fix this and install a new business model.”

Beyond the government sector, XMG said the Philippines can expect to see IT developments in other areas including social networking, consumer electronics, green IT, cloud computing and software development.

According to Hall, social networking activities in the country will see continued growth through 2011 and beyond. “Facebook and Twitter are taking market [share] from SMS,” he said. “Like e-mail and the mobile phone before, these are culture-changing products and we have not seen their full potential yet.”

“Expect more developments for use of social networking in business, but also expect higher levels of advertising, spam or its equivalent, viruses and other intrusions,” he added.

XMG also expects tablets to claim ascendancy in the gadget race.

“Most major manufacturers are due to release their first models in first-quarter 2011, while Apple is due to announce iPad 2,” Hall said. “With the rise of the middle class and tech-savvy Gen X and Gen Y Filipinos, expect to see these gadgets in local coffee shop[s]. With a wide range of devices and operating systems, there will be no leader, but expect Apple to remain strong, followed by Samsung and RIM.”

Elaborating on cloud adoption, the XMG analyst said IT vendors are expected to grow their enterprise offerings through the public cloud. “However, we do not anticipate well-established companies with significant investment in IT to [migrate] their ERP systems or legacy applications just yet in 2011,” Hall noted.

He said enterprises will need new software that is built to be deployed on the cloud, as legacy systems are not designed for such implementation.

He also pointed to green IT as a growth area for the Philippines as high utility costs in the country make a good case for the deployment of energy-efficient hardware and virtualized servers.

“The adoption of green IT practices will increase, albeit slowly, over the next 12 months primarily due to newer hardware refreshes,” Hall said. “Unlike other green-conscious economies such as Singapore and Korea, businesses and industries in the Philippines must still collectively make a commitment to saving the environment and reducing carbon emission footprint generated by technology.”

Melvin G. Calimag is a freelance IT writer based in the Philippines.

India 2011 basks in ‘solidification’ of 2010

India will see major IT trends from 2010, such as green IT and cloud computing, continue to gather momentum amid optimism that 2011 will bring new innovation and growth, industry players say.

“We are optimistic about 2011,” Sudip Nandy, CEO of communications technology vendor Aricent, told ZDNet Asia in an e-mail. As the macroeconomic environment further improves, Nandy expressed hopes to see significant spending on innovation and new applications of technology.

Ananadan Jayaraman, chief product and marketing officer at Connectiva Systems, concurred. “It will be a year of rapid growth for the business with significant activity in emerging markets, particularly, India, Southeast Asia, Eastern Europe and Latin America.”

“We believe the U.S. market will continue to be soft and will take longer to return to robust growth,” Jayaraman added. He noted that Connectiva, a revenue management software vendor, expects customers to be increasingly demanding and to expect vendors to take full responsibility for business outcomes and work with them on risk-reward models.

Surajit Sen, director of channels, marketing and alliances for NetApp India, said: “We’ll see the same economic conditions and the same major IT themes in 2011. It will be a year of solidification and increased adoption of some key trends that began in 2009 to 2010.”

For instance, Sen noted, most, if not all, companies would have adopted a “virtualize first” policy for new applications.

Green IT is also likely to gather momentum, with businesses in India continuing to adopt energy-efficient technologies to reduce costs and provide various environmental benefits.

“This trend will grow further in 2011, alongside increased use of business efficiency solutions and asset and infrastructure consolidation,” said Vipin Tuteja, executive director of marketing and international business, Xerox India. He also expects businesses to develop more collaborative work environments which seek to optimize the use of cloud.

Sen added: “There will be even more talk about cloud IT services, though buyers are still cautious. There will be a lot of talk about hybrid clouds.”

He noted that over the last couple of years, Indian IT companies also have begun to explore opportunities in markets such as Mexico, Ireland, Netherlands, Philippines and Brazil. This trend will continue in 2011 as companies continue to diversify their business from core markets such as the United States and United Kingdom.

According to Dun & Bradstreet (D&B), service providers are expected to sharpen their focus on India’s domestic market to tap imminent growth opportunities offered by the country’s booming economy.

“The rapid growth in the domestic market is likely to be driven by major government initiatives such as increased spending on e-government and increased thrust on technology adoption, and upgrades across various government departments to bridge the digital divide,” the D&B statement said.

The business research firm noted that the Indian IT-BPO (business process outsourcing) industry is expected to adopt the inorganic growth route in order to widen its service offerings and enter new geographical markets.

It added that several third-party and captive BPO units are likely to acquire small companies to ramp up revenue, acquire clients and expand business segments and geographical reach. “Consolidation will also be driven by international M&A (merger and acquisition) deals, propelled by robustness of the Indian players,” it said.

Growth driven by 3G, BWA
According to research firm IDC, the launch of 3G and BWA services is expected to boost the demand for more gadgets across India.

Sumanta Mukherjee, lead PC analyst at IDC India, said the PC market this year will be redefined by the introduction of 3G services and service bundling with existing and new PC form factors, increased functionality in mini-notebook PCs and wider adoption of IT in the education sector.

Aricent’s Nandy said: “I am very upbeat about communications since operators are rapidly adopting technologies like LTE, and devices and application vendors are constantly competing to deliver compelling user experiences to the consumers.”

Jayaraman also pointed to telecommunications, media and entertainment as sectors providing continued growth, driven by innovation in mobile, tablets and on-demand video. “Utilities is another segment where we expect to see increased IT investments driven by smart grids,” he said. “We also expect banking and insurance sectors to come back very strongly this year.”

End of tax holiday may hit firms
However, the uncertainty over whether the tax holiday will be extended after March 2011 could slow down the expansion plans of several Indian IT companies.

D&B said: “Large companies would be able to alleviate the tax burden arising from the expiry of the tax holiday by moving into SEZs (special economic zones). However, small companies, which form the bulk of the companies registered with STPI (Software Technology Parks of India), will find it hard to survive as they are still struggling post-global recession and do not have the financial resources to face this challenge.”

Swati Prasad is a freelance IT writer based in India.

Malaysia looks to higher ICT spend

Malaysia’s ICT spending is expected to rise this year and growth will be driven by several emerging technological trends, say industry watchers.

In its annual ICT predictions, research firm IDC noted that IT spending in the country will grow by 9 percent from US$5.9 billion in 2010 to US$6.5 billion this year. Spending in the telecommunication sector is expected to hit US$7.3 billion in 2011, up 5.3 percent from 2010.

Roger Ling, research manager for IDC Asean, said more changes are expected in the local ICT market that will drive growth this year. The total IT spending for Malaysia, driven mainly by purchases of hardware and packaged software, grew 6 percent in 2010, he added.

Ling said: “The growth in IT spending in 2011 is expected to be driven by factors such as the government’s continued efforts to increase the level of broadband penetration, and outsourcing initiatives by organizations looking to address the increased IT complexity.” Other factors include the continued adoption of system infrastructure software to operate and manage computing resources, he added.

In its annual prediction, Frost & Sullivan pointed to wireless broadband and cloud computing as two growth areas in the local ICT sector.

“The wireless broadband subscriber base overtook its fixed [counterpart] in 2010 and we expect this trend to accentuate leading to, among other things, the increased demand for smartphones and more competition among wireless players,” said Nitin Bhat, Asia-Pacific partner and vice president for ICT Practice, Frost & Sullivan.

In an e-mail interview, he noted that cloud will gain significant traction this year, driven by the “twin factors of supply-side maturity and demand-side understanding”. “We see a high propensity of trials, and some transactional-based cloud computing adoption among enterprises in Malaysia,” Bhat said.

Talking cloud
According to Ananth Lazarus, managing director of Microsoft Malaysia, IT investments in both the private and public sector will shift toward the cloud this year, driven by two key factors.

The first, he said, is business needs. Second, Lazarus said the government’s transformation programs will see key projects taking off this year.

“The promises of the cloud are applicable [to these programs]. Reducing costs, providing flexibility and agility in how organizations use their IT resources, ease of adoption and implementation, and not least, allowing organizations to explore and develop innovative services with a low cost of entry is what the cloud can do,” he said in an e-mail interview.

Customers will also start to explore the tradeoffs between private and public cloud offerings. Large enterprises that have been testing the waters will begin more earnest deployments and will aggressively look at building their own private clouds, he noted.

Early adopters will serve as proof-points and best practices will encourage cloud adoption among small and midsize businesses, he said, adding that government agencies would initiate discussions on key issues such as data sovereignty and public policy.

Johnson Khoo, managing director of Hitachi Data Systems (HDS), noted that businesses this year, in particular, will start looking at new investments in IT infrastructure and services, such as data centers, while continuing to focus on keeping costs low and maximizing their existing IT investments.

In an e-mail interview, Khoo noted that with the announcement under the government’s Economic Transformation Program (ETP), Malaysia is seeking to be a world-class hub for data centers in the region. The ETP is designed to boost the country into a high-income nation by doubling its per capita income to US$15,000 by 2020. The bulk of the program involves infrastructure-driven projects such as the Mass Rapid Transit system due to kick off in July.

He said HDS expects to see growing interest in data center infrastructure and related services such as co-location and Web-hosting, managed networks, disaster recovery and other outsourcing services.

Skilled workers needed
Khoo, however, cautioned that Malaysia still lacked a skilled and knowledgeable workforce to complement these infrastructure investments. He noted that the country faces a shortage in human capital with skills that are particularly crucial for the ICT industry.

“Malaysia is globally recognized as a profitable regional hub for shared-services activities,” he said. “It is vital that both the government and [industry] intensify efforts to address this to remain competitive against our neighbors in the region.”

Yuri Wahab, country general manager for Dell, concurred. “More initiatives such as the newly established Talent Corporation aimed at attracting human capital, including Malaysians working overseas, are vital to ensure the nation’s talent pool grows and that our knowledge workers contribute positively toward the development of the country,” Wahab said.

He also expressed enthusiasm for Malaysia’s ICT industry, pointing to the rollout of the country’s high-speed broadband initiative. He added that it will promote greater digital inclusion, which is a key contributor to economic growth.

“This would allow more Malaysians and local entrepreneurs to connect to and participate in an increasingly global and borderless economy… We believe that this will also drive ICT consumption in the country,” he said.

Edwin Yapp is a freelance IT writer based in Malaysia.

Tweak your Ubuntu with Ubuntu Tweak

Do you remember those days when every Windows user worth their salt installed TweakUI, in order to get as much tweaking and configuring as they could out of their PC? That tool really did a lot for the Windows OS and, believe it or not, there is a similar tool for Ubuntu. That tool? Ubuntu Tweak.

Ubuntu Tweak allows you to dig into configurations you may not have even known about…and do so with ease. That’s right, there’s very little “magic” or obfuscation involved with this tool…it’s just straight-up configuration options that might have otherwise been hidden (or at least not as easy to find). With Ubuntu Tweak you can:

  • Update your system.
  • Add sources for packages.
  • Change startup settings.
  • Configure numerous hidden desktop settings (including desktop backup and recovery).
  • Set up default folder locations.
  • Manage scripts and shortcuts.
  • Gather system information.
  • Manage file types and Nautilus settings.
  • Configure power manager settings.
  • Manage security settings.

So, how does it work? How is it installed? Let’s take a look.

Installation
You won’t find Ubuntu Tweak in the Ubuntu Software Center. Instead you need to download the .deb package and install it manually (or let your browser open up the USC for the installation). I prefer the manual method, so that is what I will demonstrate.

Download the most recent .deb package from the Ubuntu Tweak main page. Once you have that file downloaded, follow these steps:

  1. Open up a terminal window.
  2. Change into the directory holding the newly downloaded .deb file.
  3. Issue the command sudo dpkg -i ubuntu-tweak-XXX.deb, where XXX is the release number (see the consolidated commands after this list).
  4. Type your sudo password and hit Enter.
  5. Allow the package to install and then, when it is finished, close the terminal window.
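
Put together, the terminal session looks something like this (the .deb file name is a placeholder for whichever release you downloaded):

cd ~/Downloads                      # or wherever you saved the .deb file
sudo dpkg -i ubuntu-tweak-XXX.deb   # XXX is the release number
sudo apt-get install -f             # only needed if dpkg reports missing dependencies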

Usage
To start up Ubuntu Tweak click on Applications | System Tools | Ubuntu Tweak. When you first start up the tool, it will give you a warning that you should enable the Ubuntu Tweak stable repository. Click OK to do this. Once that warning is out of the way, you can dig into the tweaking of your Ubuntu OS.

Figure A

The interface for Ubuntu Tweak is very well done (see Figure A). As you can see, the left pane is broken into categories: Applications, Startup, Desktop, Personal, and System. Some of these tweaks will require the use of sudo and some will not (depending on the nature of the configuration).

One very handy configuration in the Personal section is Templates. Here you can drag and drop files into the main window and those files will then be added as document templates.

From an admin standpoint, a very handy option is the Login Settings in the Startup section. In this section you can configure:

  • Disable user list in GDM.
  • Play sound at login.
  • Disable showing the restart button.
  • Login theme.

Obviously not every option is a gem, but the ability to hide the user list and disable the restart button on the login screen can be very handy.

Finally you will want to take a look at File Type Manager in the System section. This allows you to manage all registered file types on your system.

I have only scratched the surface of Ubuntu Tweak–it really is an incredibly powerful and handy tool that any and all Ubuntu users and administrators should get to know. From this single window, you can configure and administer many items that would otherwise require hunting through the System menu.

The art of the small test program

It’s happened again. No matter how carefully you’ve tested each capability of the language, the library, or the framework you use. No matter how religiously you’ve built unit tests for each component. When you finally bring it all together into an application masterpiece, you get a failure you do not understand.

You try every debugging technique you know; you rewrite and simplify the most suspect passages; you stub out or eliminate entire components. Perhaps this helps you narrow the failure down to a particular region, but you still have no idea what’s going wrong or why. If you have the sources to the language or the library, you may get a lot further than if they’re proprietary, but perhaps you still lack the knowledge or the documentation to be able to make enough sense of the failure to solve the problem.

It’s time to get some help. You post questions on the fora, or you contact the author/vendor directly, but they can’t reproduce the problem. (Somehow, you knew that would happen.) They want you to send them a reproducing case. You direct them to your entire application, and the problem never gets resolved, because it’s just too much trouble. The end.

Okay, we don’t like that ending. How can we rewrite it? In the case of paid support we can stomp, yell, and escalate to force the vendor to spend time on the problem; but if it turns out to be too difficult to get the entire app running and debuggable, then they can still plead “unreproducible”. There is only so much that a vendor can do. Even if they stay on the problem, it could take a very long time to get to the bottom of it. Fortunately, there’s something we can do to help the vendor help us: It’s called the Small Test Program (STP).

“Whoa! Wait a minute! We already removed everything extraneous when we were attempting to debug this!” I hear you cry.

That may be true, but our goal then was to rule out other causes. You can almost always do more by shifting the goal to reducing the footprint of the test case. The two goals sound almost the same, and they overlap a lot, but they don’t cover entirely the same ground. In the first case, we were trying to do everything we could to help ourselves solve the problem. In the second, we want to do everything we can to help the developer solve the problem. That means we need to take the following steps:

  • Remove reliance on a specific configuration. No doubt you’ve customized your development environment with all sorts of shortcuts and conventions to save yourself time; every one of those costs time, though, for someone who isn’t familiar with them. You either need to remove those dependencies and create a more vanilla example, or provide an instant setup for them that won’t be invasive. For instance, if you need the user to set certain environment variables, provide a script that does that and then launches the app (see the sketch after this list). Preferably, eliminate the dependency on environment variables altogether — dependencies can add to the confusion by being set in more than one place, or not getting exported properly.
  • Eliminate all custom or third-party components that you can. You should have already done this, but it becomes even more important when submitting a failure. External components attract the finger of blame — as they should, because they often cause unforeseen problems. Rule them out. Furthermore, if the external components require installation and setup, that delays the developer from being able to look at the problem. Developers often have trouble getting these components to work on their system, which is all wasted time if they didn’t really need them to begin with.
  • Reduce the number of user steps required. If you think that one or two runs through the test case will reveal the problem, then your name must be Pollyanna. If they have to run your test a thousand times, every minute of elapsed execution time costs two work days. It’s actually more than that because people are human–every time the developers have to restart a long, arduous set of steps, they need a pause to sigh and wonder where they went wrong in life.
  • Clearly document the steps required. I don’t know how many times I’ve received something claiming to be the steps to reproduce a problem that reads “Run the app.” Unless the app is so simple that it requires no setup or interaction, and the failure is so obvious that not even [insert archetypal clueless person here] could miss it, this instruction will fail to reproduce. No matter how apparent it may seem, include every step–every setup command, the command to launch the app, and every input required. If you followed the previous steps, this shouldn’t be much.
  • Reduce the number of lines of code executed as much as possible. Maybe the entire program runs in two seconds, but if it executes 30,000 lines of code, then that’s at least 30,000 possible causes that the developer may have to rule out. Furthermore, it complicates debugging. If you can get the entire program down to “step, step, kaboom!” then you’re gold.
  • Include clear indications of failure. Don’t presume that the developer will recognize immediately that your Weenie Widget is 10 pixels too short — tell them so in the steps. Ideally, the application should scream out “Here’s where I’m failing!” when it’s run. Use assertions, or at least a printf or message box.
  • Include clear indications of success. How many times have I solved a problem presented by a test program, only to run into another failure immediately afterward? Did I fix a problem that they weren’t reporting, and now I’m seeing the one they meant? Usually, they know about the second one, but they just didn’t bother to prevent it since they had reproduced a failure with the first one. This is bad form. Ideally, you want your test program to be tailor-made for inclusion in a test suite so the same problem doesn’t get reintroduced. For that to happen, it needs to cross the finish line with flying colors. Let there be no doubt that it was successful.
  • Test your test. Run through the test as if you were the developer assigned to work on it to make sure you didn’t forget anything. Don’t run it on your development system, because your environment might be set up in a way that the developer’s isn’t. Use a virtual machine with a vanilla configuration to run the test and make sure it fails in exactly the way you intended. It could save you a few email round trips and avoid giving the impression that you don’t know what you’re doing.
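
To make the configuration point above concrete, a launcher script for a small test program can be as simple as the following sketch (the variable names and the myapp binary are purely hypothetical):

#!/bin/sh
# Set up exactly the environment the test case needs, then run it,
# so the developer doesn't have to reverse-engineer your setup.
WIDGET_LOG_LEVEL=debug
WIDGET_DATA_DIR=./testdata
export WIDGET_LOG_LEVEL WIDGET_DATA_DIR

exec ./myapp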

Why you should create an STP
Why should you put the extra effort into creating an STP? It’s their bug, after all. Let them find it and fix it.

Most of my clients are software developers, so I’ve looked at this issue from both sides. I’ve been the recipient of hundreds (perhaps thousands) of failures to solve over the last 20 years, and I’ve had to submit my share of them to numerous software providers. I can tell you from my experiences that more than anything else–more than whether you pay the vendor to support the product or how much, more than all the screaming and yelling you can muster, more than all the flattery you can lay on them, more than any reputation they may have for responding in a timely manner–the single most influential factor in determining how quickly the developers will resolve your problem is how clearly and concisely you’ve demonstrated the failure.

So, the next time you need to submit a problem report, remember the immortal words of Steve Martin: “Let’s get small.”

Chip Camden has been programming since 1978, and he’s still not done. An independent consultant since 1991, Chip specializes in software development tools, languages, and migration to new technology. Besides writing for TechRepublic’s IT Consultant blog, he also contributes to [Geeks Are Sexy] Technology News and his two personal blogs, Chip’s Quips and Chip’s Tips for Developers.

Data Protection Manager 2010 migration successes and challenges

In a September 2010 TechRepublic article, I discussed Westminster College’s migration from Symantec’s Backup Exec to Microsoft’s Data Protection Manager (DPM) 2010 and outlined our reasons for making the switch.

We were facing four challenges:

  • Backup Exec licensing. We had been using Backup Exec for quite some time and needed to deploy additional servers and services and be able to protect some new workloads, including Exchange 2010 and SharePoint 2010 data. We were out of licenses to protect these workloads and would have needed to upgrade the existing software as well.
  • Challenged backup window. Our backup window was starting to get a bit tight.
  • Lack of continuous protection. We were using a very traditional backup operation that relied on full backups on weekends and differential backups once per day throughout the week. This left significant opportunity for data loss in between backups.
  • Recovery time. When recovery operations needed to be performed, they could be monotonous, time-consuming tasks because we were still fully reliant on tape as our primary backup storage vehicle.

Since September 2010, we made significant progress in migrating our backup operations from Backup Exec to DPM 2010, although we still have a few workloads that reside on Backup Exec. Here’s an update on our migration progress, in which I share some successes we’ve had, challenges we’ve identified and new opportunities that have arisen to improve our backup and recovery capability.

Successes
All of our critical workloads are being well protected under DPM 2010, including all of our enterprise, mission-critical database applications, Exchange 2007 and 2010, SharePoint 2010, and our file services.

I’m incredibly impressed by DPM, but I would probably feel the same way about just about any disk-based backup and recovery tool due simply to the sheer speed of recovery. Several weeks ago, we had a need to restore a backup from the previous evening of our ERP database, but we needed to restore it with a different name so that it could be modified by our ERP vendor for an implementation project that we have underway. Previously, this kind of activity would have taken an hour or two; however, we decided to give it a go with DPM.

Between the time it took to stage the recovery and actually restore that database to a new name and location, we had invested a grand total of less than 10 minutes–for a 28 GB database.

My staff and I also learned that, although DPM doesn’t come right out and say that you can rename a database during a restore, you can easily do so by telling DPM to restore the database to an alternate SQL instance, then choosing the original instance, providing the new database name, and telling DPM where in the file system the database files should be restored.

Our ERP vendor was pretty surprised when we emailed them less than 15 minutes after receiving their initial request for this “play” database letting them know that their request had been completed. In the long term, this kind of turnaround time is good for us, too. Recovery time is surprisingly fast with DPM. Of course, we’re recovering from disk over a 1 Gb Ethernet network in this example, so it should be faster than our previous tape-based recovery operations.

We’re protecting mission-critical workloads much more often than we’ve ever been able to in the past. For example, we have our database applications updating the DPM replica every 15 minutes to one hour, depending on workload.

Challenges
The primary challenge that we still face is protection of our SharePoint 2007-based workloads; this is the last item still being protected by Backup Exec. The only limiting factor has been troubleshooting time, which we will get over the next couple of weeks. In the meantime, we’ve redirected Backup Exec-based protection to a disk-based virtual tape library.  From there, we protect the Backup Exec data with DPM so that we’re continuing to provide maximum protection to all data.

Another challenge is that we, unfortunately, have some Windows 2000-based services still in production that we had to find ways to protect.  We’ve been able to work around DPM’s inability to directly protect Windows 2000 machines by scheduling local backups and then simply handling those backups as file objects on other servers. We’re working hard to get away from these Windows 2000 services.

More about our future plans
We house our backup systems outside our data center in another campus location that is, for all intents and purposes, underground. The location is not ideal from an accessibility standpoint, so we’ve been exploring other options. We could host backups completely off campus–and we will be doing so at some point–but as our primary backup mechanism, I don’t believe in hosting the service anywhere near the data center.

As the college has been working on new construction, we’ve worked with our developer to create what I believe is a perfect solution for the backup hosting challenge. In one of our new buildings (it’s about as far away from the data center as you can get and still stay on the campus network), the developers will be constructing in the basement a concrete bunker with 12-inch-thick concrete walls and a concrete ceiling. They will also be installing a 3-hour-rated fire door and standalone cooling for us. This room will be situated in the building so that it is as far underground as possible. In fact, on the other side of the outside wall will be nothing but earth.

Summary
The more I use DPM, the more satisfied I am with the product and the decision to move to it.  It has proven to be very fast, easy to manage, and robust. Overall, it has been a great addition to our backup and recovery arsenal.

Change a slide’s orientation in PowerPoint

Microsoft PowerPoint


Change a slide’s orientation in PowerPoint

You know that you can use portrait or landscape orientation in Word and Excel documents. What you might not know is that you can apply the same orientation setting to PowerPoint slides. Similar to pages in a document or report, you can change the orientation setting from slide to slide.

By default, slides are landscape. Choosing to change that default should be part of your design process. Switching from landscape to portrait, after the fact, will seldom produce results you’ll want to use.

To set a slide’s orientation, do the following:

In PowerPoint 2003:
  1. From the File menu, choose Page Setup.
  2. In the resulting Page Setup dialog, check Portrait or Landscape in the Slides section or the Notes, Handouts & Outline section.

In PowerPoint 2007/2010:
  1. Click the Design tab.
  2. Click the Slide Orientation dropdown in the Page Setup group and choose an option.

This tip won’t wow them at the newsgroups; it falls into the “I didn’t know you could do that” category. If you change a slide’s orientation, be sure to test it in as many environments as possible–it might look good on your development system but look squashed on another, especially a laptop.

Microsoft Word


How to remove the spacing between paragraphs

Word adds space between paragraphs–whether you want it to or not. If you display paragraph marks, you’ll not find any extra paragraph marks. This behavior is part of Word’s styling. When you press Enter to create a new paragraph, Word increases the line spacing to mark the change from one paragraph to another.

You can’t change the spacing between paragraphs using Backspace–the key you might press first, just from habit. Doing so will just create one big paragraph. Fortunately, you can change the spacing and Word is flexible enough to allow you to change the spacing for one paragraph, several paragraphs, or all paragraphs.

To change spacing between just two paragraphs, choose the paragraph below the space you want to remove and press [Ctrl]+0. If the first combination adds a bit more space, press [Ctrl]+0 a second time to remove the extra space.

You can remove the spacing between all paragraphs, as follows:

  1. Click Home | Paragraph dialog launcher (the small arrow in the lower right corner). In Word 2003, select Paragraph from the Format menu and click the Indents and Spacing tab.
  2. Check the Don’t Add Space Between Paragraphs Of The Same Style option.
  3. Click OK.

The change will be apparent in any new content; it will not affect existing content. To remove the space between existing paragraphs, you must select the text first. In addition, if you copy several paragraphs that contain spacing, that spacing will remain intact.

When this option is enabled, you can’t use the Spacing option in the Paragraph group on the Page Layout tab. You must select the paragraphs and uncheck the Don’t Add Space… option first.

One last thing–this property affects only the current document. If you want to set this as a default property, click the Set As Default button in the Paragraph dialog box.

Microsoft Excel


Use conditional formatting to format even and odd rows

Many users like to shade every other row to make sheets more readable, especially when there’s lots of data. Sometimes restrictions can complicate things, or at least you might think so initially. For instance, you might think that shading only odd or even rows is a harder task than shading every other row, but you’d be wrong. Using conditional formatting, formatting only odd or even rows is simple:

  • To format even rows only, use the conditional formula =EVEN(ROW())=ROW().
  • To format odd rows only, use the conditional formula =ODD(ROW())=ROW().

Now, let’s work through a quick example:

  1. Select the rows you want to format. To select the entire sheet, click the Sheet Selector (the gray cell that intersects the row and column headers).
  2. Click the Home tab.
  3. Click the Conditional Formatting dropdown in the Styles group and choose New Rule.
  4. From the Select A Rule Type list, choose Use A Formula To Determine Which Cells To Format.
  5. In the Format Values Where This Formula Is True field, enter =EVEN(ROW())=ROW().
  6. Click Format.
  7. Specify any of the available formats. For instance, to shade all even rows red, click the Fill tab, click Red, and click OK twice.

Notice that the even rows are now red. To shade odd rows, repeat the above steps. In step 5, enter the formula =ODD(ROW())=ROW(). In step 7, choose a contrasting color, such as green. This technique isn’t just for shading; it’s for formatting in general.

Okay, that’s hideous, but it makes the point well–with little effort, you can format all even or odd rows. Please don’t ever do this to a real sheet unless you’re pranking someone!

7 overlooked network security threats for 2011

No one working in network security can complain that the issue has been ignored by the press. Between Stuxnet, WikiLeaks server attacks and counterattacks, and the steady march of security updates from Microsoft and Adobe, the topic is being discussed everywhere.

IT workers who have discovered that consolidation, off-shoring, and cloud computing have reduced job opportunities may be tempted to take heart in comments such as Tom Silver’s (senior vice president for Dice.com) claim that “there is not a single job position within security that is not in demand today”. This and similar pronouncements by others paint a rosy picture of bottomless security staff funding, pleasant games of network attack chess, and a bevy of state-of-the-art security gadgets to address threats. Maybe.

In these challenging times, separating hype from visionary insight may be a tall order. Yet it’s important to strike a sensible balance, because there are problems both with underestimating the problem and with overhyping the value of solutions. This situation became readily apparent when making a list of overlooked threats for the upcoming year. The task of sorting through the hype must not become a cause that only managers will be inspired to take up.

Table A summarizes a modest list of security threats that are likely to be overlooked in the coming year. The list thus adds to the mélange of worry-mongering, but at least the scenarios are plainly labeled as worst case scenarios.

Threat Area: Worst Case Scenario
1. Insider Threat: Enterprise data including backups destroyed, valuable secrets lost, and users locked out of systems for days or even weeks.
2. Tool Bloat Backlash: Decision-makers become fed up with endless requests for security products and put a freeze on any further security tools.
3. Mobile Device Security: A key user’s phone containing a password management application is lost. The application itself is not password-protected.
4. Low Tech Threats: A sandbox containing a company’s plan for its next generation of cell phone chips is inadvertently exposed to the public Internet.
5. Risk Management: A firm dedicates considerable resources to successfully defend its brochure-like, e-commerce-less web site from attack, but allows malware to creep into the software of its medical device product.
6. SLA Litigation: Although the network administrator expressed reservations, a major customer was promised an unattainable service level for streaming content. The customer has defected to the competition and filed a lawsuit.
7. Treacheries of Scale: A firm moves from a decentralized server model to a private cloud. When the cloud’s server farm goes offline, all users are affected instead of users in a single region.

Table A. Worst Case Scenarios for Overlooked Network Security Threats

1. Insider threat
Millions of dollars can be spent on perimeter defenses, but a single employee or contractor with sufficient motivation can easily defeat those defenses. With sufficient guile, such an employee could cover his tracks for months or years. Firms such as Symantec Vontu have taken a further step and characterized the insider threat issue as “Data Loss Prevention” (DLP). Also in this category are attacks on intellectual property, which tend to be overlooked in favor of more publicized losses.

2. Tool bloat backlash
Recent TSA changes to airport security demonstrate that the public’s appetite for security measures has limits. The same is true for network security. As demands mount for more and more tools that consume an increasingly large share of the IT budget, backlash is inevitable. Many tools contribute to a flood of false positives and may never be called on to resist an actual attack. There is a network security equivalent of being overinsured.

3. Mobile device security
There’s lots of talk about mobile device security, but despite prominent breaches employing wireless vectors, many enterprises haven’t taken necessary precautions.

4. Low-tech threats
Addressing exotic threats is glamorous and challenging. Meeting ordinary, well-understood threats, no matter how widespread, is less interesting and is thus more likely to be overlooked. Sandboxes, “test subnets,” and “test databases” all receive second class attention where security is concerned. Files synchronized to mobile devices, copied to USB sticks, theft of stored credentials, and simple bonehead user behaviors (“Don’t click on that!”) all fit comfortably into this category. Network administrators are unlikely to address low tech threats because more challenging tasks compete for their attention.

5. Risk management
Put backup and disaster recovery in this category, but for many organizations, servers with only one NIC, or reliance on aging, unmonitored switches and exposed cable routing, are equally telling examples. Sadly, most organizations are not prepared to align risks with other business initiatives. To see where your organization stands in this area, consider techniques such as Forrester’s Lean Business Technology maturity matrix for Business Process Management governance.

6. SLA Litigation
Expectations for service levels, particularly among the public, continue to rise, and competitive pressures will lead some firms to promise service levels that may not be attainable.

7. Treacheries of scale
There will be a network management version of the Qantas QF32 near-disaster. Consequences of failure, especially unanticipated failure, increase as network automation becomes more centralized. Failure points and cascading dependencies are easily overlooked. For instance, do network management tools identify single points of failure (SPOF)? A corollary is that economies of scale (read network scalability) lead directly to high efficiency threats – that is, risks of infrequent but much larger scale outages.

What’s a network administrator to do? Address the issues over which some control can be exerted, and be vigilant about the rest. Too much alarm-sounding is likely to weaken credibility.

PowerShell script for getting Active Directory information

For a work project, I needed to compare actual Active Directory information to what was present in our ERP system, as well as match it with information about each user’s Exchange 2003 mailbox.

I wrote a “down and dirty” PowerShell script to extract a number of fields from Active Directory and write the extracted information into a CSV file. My overall plan was to compare the three data sets–the Active Directory information, the Exchange mailbox information, and the ERP information–using Excel, while making sure there was information in all three data sets that would link the data sets to each other.

Here is more information about the project, followed by the PowerShell script I wrote.

Project details
Our reasons for this project:

  • The organization has 16,000 Exchange mailboxes, and we wanted to ensure that only users who should have mailboxes do.
  • We also wanted to ensure that Active Directory accounts for departed employees are inactive and are marked for removal.

These were the project challenges:

  • In a separate report, I had to use WMI to gather Exchange mailbox information since Exchange 2003 doesn’t include PowerShell.
  • The organization has more than 600,000 user accounts in Active Directory, most of which are valid; only about 20,000 of these accounts are employees, while the rest are customers. However, in some cases, the customers were also temporary employees, so there was a need to search the entire Active Directory database for potential employee accounts.

A look at the PowerShell script
Notes:
This PowerShell script was intended for one-time use, and that creates a very different development environment, at least for me. I was going for immediate functionality rather than elegance (I am not a programmer), which is why I consider this a “down and dirty” PowerShell script.

I’ll take a line-by-line (or, in some cases, a section-by-section) look at what this PowerShell script does and explain my thinking.

# Start of script

I needed to clear the screen before script execution to make sure there was no clutter that would confuse me when I looked at display results.

Cls

I added a processing loop to break down the Active Directory information into usable chunks. Prior to adding this loop, my script crashed because the machine on which I was running it ran out of memory trying to handle more than 600,000 records at once. Each item in the “targetou” section is an Active Directory organizational unit. Immediately below, you will see a line that outputs to the screen which OU is currently being processed. By displaying information at run time, I know exactly where I am in a process.

foreach ($targetou in 'A','B','C','D','E','F','G','GUESTACCOUNTS','H','I','J','K','L','CONTRACTOR',
'M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
{
"Processing information for OU $targetou"

The $targetou variable above is the lowest point in the Active Directory hierarchy at which I worked. The $domainrootpath variable builds the full LDAP string to the OU against which the script was to run for each iteration.

$DomainRootPath='LDAP://OU='+$targetou+',OU=ORGUSER,DC=contoso,DC=com'

The next several lines create and populate an Active Directory searcher object in PowerShell.

$adsearch = New-Object DirectoryServices.DirectorySearcher([adsi]$DomainRootPath)

I limited the kinds of objects that would be returned. The line below limits results to user objects.

$adsearch.filter = "(objectclass=user)"

The PropertiesToLoad items below were necessary for the reporting task I had ahead of me. These lines modify the behavior of the Active Directory search by forcing it to return only what is specified rather than returning everything. Because of the size of the data set, I needed to limit the returned data to only what was essential.

$adsearch.PropertiesToLoad.AddRange(@("name"))
$adsearch.PropertiesToLoad.AddRange(@("lastLogon"))
$adsearch.PropertiesToLoad.AddRange(@("givenName"))
$adsearch.PropertiesToLoad.AddRange(@("SN"))
$adsearch.PropertiesToLoad.AddRange(@("DisplayName"))
$adsearch.PropertiesToLoad.AddRange(@("extensionAttribute1"))
$adsearch.PropertiesToLoad.AddRange(@("extensionAttribute2"))
$adsearch.PropertiesToLoad.AddRange(@("comment"))
$adsearch.PropertiesToLoad.AddRange(@("title"))
$adsearch.PropertiesToLoad.AddRange(@("mail"))
$adsearch.PropertiesToLoad.AddRange(@("userAccountControl"))
$adsearch.Container

This line executes the search based on the parameters specified above. For each iteration of the foreach loop, Active Directory will search the organizational unit for that loop and return all of the attributes specified above for each user account. The results of the execution will be stored in the variable named users. Unfortunately, as it exists, the information from this array can’t be simply written to a CSV file since that CSV file would contain only the Active Directory object name and an entry called “System.DirectoryServices.ResultPropertyCollection.” I needed to expand out and capture the individual Active Directory elements, which I do later in the script.

$users = $adsearch.findall()

As the script was running, I wanted to know how many objects were returned from each loop iteration, so I added the line below to show how many user accounts were being handled.

$users.Count

I initialized an array variable into which I’d write the individual Active Directory elements we wanted to capture.

$report = @()

I started another loop that executes for each Active Directory account for which we wanted to capture information.

foreach ($objResult in $users)
{

I needed to create a variable that houses the properties for an individual record. (There are other ways to do this, but I like to break things down to make them more readable.)

$objItem = $objResult.Properties

I created a new temporary object into which to write the various Active Directory attributes for this single record being processed in this processing iteration (remember, this is repeated for each record returned from Active Directory).

$temp = New-Object PSObject

For each individual Active Directory property that was returned from the Active Directory searcher, I added a named property to the temp variable for this loop iteration. Basically, this breaks out the single Active Directory record for a user into its individual components, such as name, title, email address, and so forth. (Case-sensitivity matters in this section.)

$temp | Add-Member NoteProperty name $($objitem.name)
$temp | Add-Member NoteProperty title $($objitem.title)
$temp | Add-Member NoteProperty mail $($objitem.mail)
$temp | Add-Member NoteProperty displayname $($objitem.displayname)
$temp | Add-Member NoteProperty extensionAttribute1 $($objitem.extensionattribute1)
$temp | Add-Member NoteProperty extensionAttribute2 $($objitem.extensionattribute2)
$temp | Add-Member NoteProperty givenname $($objitem.givenname)
$temp | Add-Member NoteProperty sn $($objitem.sn)
$temp | Add-Member NoteProperty useraccountcontrol $($objitem.useraccountcontrol)

I added the results of this individual record to the primary array into which we’re capturing the full results from the search for later export to CSV.

$report += $temp
}

This line creates the name of the file that will be written. I created a new file for each organizational unit processed.

$csvfile="AD-"+$targetou+".csv"

These lines write the entire file to disk and then notify the user that processing for this OU has completed.

$report | export-csv -notypeinformation $csvfile
"Wrote file for $targetou"
}

Summary
For my purposes, this PowerShell script captured exactly the information that I needed, and I was able to complete my comparison task. If you know of a more elegant way to get this information, please post it in the discussion.

A simple user primer for init

Many Unix-like systems–particularly those that follow the SysV model–make use of the concept of the runlevel. On these systems, runlevels are different modes of operation, some of which can be customized by the system administrator.

In the Linux world, the typical assignment of functionality to runlevels is:

  • 0: system halted
  • 1: single user mode
  • 2: multi-user mode, on some distributions without network services (Debian-style systems treat 2 through 5 as full multi-user)
  • 3: full multi-user mode, text only
  • 4-5: additional multi-user modes (5 usually adds a graphical login)
  • 6: restart

Switching runlevels is simple from the command line. The init command takes a number as an argument that can be used to switch runlevels.

telinit
The actual init daemon starts when the system boots and manages process startup and shutdown for the current runlevel. When you run the init command from a root shell (rather than as process 1), however, it effectively executes telinit. The telinit program switches to the runlevel corresponding to the numeric argument given to the init command. This means that the command init 0 will shut down the system, init 1 will shut down processes and enter single user mode, and init 6 will restart the system.
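
For instance, on a SysV-style Linux system, checking and changing the runlevel from a root shell might look like this (a minimal illustration; the runlevel and who commands are common but not universal):

runlevel      # prints the previous and current runlevel
who -r        # another way to display the current runlevel
init 1        # drop to single user mode
init 6        # restart the system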

Three non-numeric arguments can also be used.

  • The letter q requests that init reload its configuration. It is largely unnecessary in many current Linux-based operating system configurations.
  • The letter s can be used to enter single user mode as well. Care should be taken when doing so, however; init s does not shut down current processes the way init 1 does.
  • The letter u requests that init re-execute itself.

For the most part, numeric values will be the only arguments you will need to give the init command (and, by extension, the telinit command). In fact, most often you would not need anything but init 0 or init 6, with an occasional need to use init 1. It is typical for Linux-based systems to be set up to automatically boot into the appropriate runlevel for normal operation.

Configuration of which processes are started and stopped with a given runlevel is primarily handled by the contents of /etc/rcN.d directories. Within these directories, symlinks to scripts in the /etc/init.d directory indicate which processes should be started or stopped when entering or leaving a given runlevel.
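
As a quick illustration (the service names here are hypothetical and will differ between systems), listing one of these directories on a SysV-style system might show entries like the following; links beginning with "S" start the service when the runlevel is entered, links beginning with "K" stop it, and the two-digit number controls ordering:

ls -l /etc/rc2.d
# S20ssh -> ../init.d/ssh
# K80nfs-common -> ../init.d/nfs-common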

BSD Unix init
The BSD Unix init command serves a similar role, but it does not use the SysV init system. On BSD Unix systems, init is actually a utility that executes the rc utility. In some ways much like SysV init, BSD rc manages startup of processes on boot. The init command is used with a somewhat different set of arguments, however, because it does not use SysV runlevels:

  • init 0: shut down the system
  • init 1: enter single user mode
  • init 6: restart the system
  • init c: block further logins
  • init q: rescan the ttys file

The q option serves a purpose similar to the same argument to the Linux/SysV version of the init command.

Configuration of the rc system can vary across systems that use it. In the case of FreeBSD, most relevant configuration is handled by the /etc/rc.conf file, and by rc scripts in the /etc/rc.d directory. See the rc.conf manpage for details.
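
For example, /etc/rc.conf entries are plain shell variable assignments, and individual services can be driven through their rc scripts (the services shown are just illustrative choices):

sshd_enable="YES"        # start sshd at boot
ntpd_enable="YES"        # start ntpd at boot

/etc/rc.d/sshd restart   # restart a single service through its rc script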

shutdown
Many Unix-like systems provide a shutdown command that performs much the same purpose as certain init commands, and typically adds some convenient features such as sending warnings to user shells, delaying change of operating mode for a specified period of time or at a particular time of day, and kicking all users out of their logins and preventing all new logins. The shutdown command varies from system to system, and its manpage should be consulted for specifics on a given Unix-like OS.
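
The exact flags differ between systems, but on many Linux systems a delayed, announced shutdown looks something like the following (check your local manpage before relying on it):

shutdown -h +10 "Going down for maintenance in 10 minutes"   # halt in 10 minutes and warn logged-in users
shutdown -r now                                              # restart immediately
shutdown -c                                                  # cancel a pending shutdown, where supported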

This is not a comprehensive guide
Obviously, an in-depth, comprehensive survey and explanation of the entire system related to the init command is beyond the scope of a single article. With a little bit of enthusiasm and time, however, a lot can be learned about how to manage system operation modes via commands like init and shutdown, and to configure the underlying system, from manpages.

Simple filters in Perl, Ruby, and Bourne shell

In Eric Raymond’s The Art of Unix Programming, he referred to the usefulness of a type of utility called a “filter”: “Many programs can be written as filters, which read sequentially from standard input and write only to standard output.”

An example provided in the book is of wc, a program that counts characters (or bytes), “words”, and lines in its input and produces the numbers counted as output. For instance, checking the contents of the lib subdirectory for the chroot program files could produce this output:

~/tmp/chroot/lib> ls
libc.so.7    libedit.so.7    libncurses.so.8

You could pipe the output of ls to wc to get the number of lines, words, and characters:

~/tmp/chroot/lib> ls | wc
3    3    39

Writing your own filter scripts is incredibly easy in languages such as Perl, Ruby, and the Bourne shell.

Perl script
Perl’s standard filter idiom is quite simple and clean. Some people claim that Perl code is unreadable, but they have probably never read well-written Perl.

#!/usr/bin/env perl

while (<>) {
# code here to alter the contents of $_
print $_;
}

To operate on the contents of a file named file.txt:

~> script.pl file.txt

You can also use pipes to direct the output of another program to the script as a text stream:

~> ls | script.pl

Finally, you can call the script without piping any text stream or naming any file as a command line argument:
~> script.pl

If you do so, it will listen on standard input so that you can manually specify one line of input at a time. Telling it you are done is as easy as holding down [Ctrl] and pressing [D], which sends it the end-of-file (EOF) character.

If you want to do something other than alter the contents of Perl’s implicit scalar variable $_, you could print some other output instead. The $_ variable contains one line of input at a time, which can be used in whatever operations you wish to perform before producing a line of output. Of course, output does not need to be produced within the while loop either if you do not want to. For instance, to roughly duplicate the standard behavior of wc is easy enough:

#!/usr/bin/env perl

my @output = (0,0,0);

while (<>) {
    $output[0]++;
    $output[1] += split;
    $output[2] += length;
}

printf "%8d%8d%8d\n", @output;

Unlike wc, this does not list counts for several files specified as command line arguments separately, nor list the names of the files in the output. Instead, it simply adds up the totals for all of them at once. This simplistic script does not offer any of wc‘s command line options, either, but it serves to illustrate how a filter can be constructed.

The other examples will only cover the basic filter input handling idiom itself, and leave the implementation of wc-like behavior as an exercise for the reader.

Ruby script
Ruby does not have a single idiom that is obviously the “standard” way to do it. There are at least two options that work quite well. The first uses a Ruby iterator method, in typically Rubyish style:

#!/usr/bin/env ruby

$<.each do |line|
# code here to alter the contents of line
print line
end

The second uses a while loop, but does not use the kind of “weird” symbol-based variable that some programmers remember only with distaste from Perl:

while line = gets
# code here to alter the contents of line
print line
end

Operating on the contents of a file, taking input interactively, or accepting a text stream as input works the same as for the equivalent Perl script.

Shell script
This is the least powerful filter idiom presented here because the Bourne shell does not provide the same succinct facilities for input handling as Perl and Ruby:

#!/bin/sh

while read data; do
    # code here to alter the contents of $data
    echo "$data"
done

To operate on the contents of a file named file.txt, you have to use a redirect, because feeding the script a filename as a command line argument simply results in an error. Calling the script with a redirect is still simple enough, though:

~> script.sh < file.txt

The redirect character < is used to direct the contents of file.txt to the script.sh process as a text stream. You can also use pipes to direct the output of another program to the script as a text stream, as with the other examples:

~> ls | script.sh

The behavior you see with the Perl and Ruby examples can be duplicated in the Bourne shell, but it takes a bit more code: a conditional statement has to deal both with the case where the filename is provided as a command line argument without the redirect and with the case where a text stream is directed to the program by some other means. It hardly seems worth the effort to avoid using a redirect, but a minimal sketch of the approach follows.
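
#!/bin/sh
# If a filename was given, read from it; otherwise read from standard input.
if [ -n "$1" ]; then
    exec < "$1"
fi

while read data; do
    # code here to alter the contents of $data
    echo "$data"
done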

Go forth and code
In my TechRepublic article Seven ideas for learning how to program, I suggested that writing Unix admin scripts could serve as a great way for new programmers to practice the craft of coding. Filters are among the most useful command line utilities in a Unix environment, and as demonstrated here, they can be surprisingly easy to write with a minimum of programming skill.

Regardless of your programming experience, these simple filter script idioms in three common sysadmin scripting languages can help any Unix sysadmin do his or her job better.

Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.

How to add a watermark to your Word documents

Microsoft Word


How to add a watermark to your Word documents

A watermark is a picture or text that appears behind a document’s contents. It’s usually a light grey or other neutral color so it doesn’t distract too much from the document’s purpose. Usually, a watermark identifies a company or the document’s status. For instance, a watermark might say confidential, urgent, or display a symbolic graphic. Adding a watermark to a Word document is a simple process:

  1. Click the Page Layout tab.
  2. Click Watermark in the Page Background group.
  3. Choose a watermark from the gallery. Or…
  4. Choose Custom Watermark. The Printed Watermark dialog presents three options. You can remove a custom watermark or insert a picture or text as watermark.
  5. Click OK once you’ve made your selections.

If you're using Word 2003, add a watermark as follows:

  1. From the Format menu, choose Background.
  2. Click Printed Watermark.
    To insert a picture as a watermark, click Picture Watermark. Then click Select Picture, navigate to find the picture file, and click Insert.
    To insert a text watermark, click Text Watermark and select or enter the text you want.
  3. Set any additional options.
  4. Click OK.

The watermark will display as part of the background on every page. Adding a watermark to a document is simple, yet effective.

Microsoft Excel


Excel parsing expressions

You probably wouldn’t store first and last names in the same cell, but you might have to work with a legacy workbook that does. Or, you might import data from a foreign source where the names are combined into one field. Fortunately, Excel has several string functions, including Right(), Left(), Find(), Len(), and Mid(), that can parse the name components into separate entries.

First, the easy part: parse the component to the left using this simple expression:
=LEFT(A2,FIND(" ",A2)-1)

It makes no difference whether the component is the first or last name. In the case of Robin Banks, the FIND() function returns the value 6, but the expression subtracts 1 from the result. Consequently, the expression extracts the first five characters. If you want to include the space character, omit the -1 component.

The inconsistency of the entries–some have middle initials and some don’t–makes extracting the last name a bit more complicated. You might try the following expression, but as you can see, it doesn’t work as expected:
=RIGHT(A2,LEN(A2)-FIND(" ",A2,FIND(" ",A2)+1))

If the entry doesn’t contain two space characters, the second FIND() returns an error value. Use the following expression instead:
=IFERROR(RIGHT(A2,LEN(A2)-IFERROR(FIND(" ",A2,FIND(" ",A2)+1),FIND(" ",A2))),A2)

IFERROR() handles the errors, but the logic is similar.

There’s one last step–returning the middle initial:
=MID(A2,FIND(" ",A2)+1,IFERROR(FIND(" ",A2,FIND(" ",A2)+1)-FIND(" ",A2)-1,0))

If there’s no middle initial, this expression returns an empty string instead of an error.

It’s worth mentioning that the Text To Columns feature is an expression-less solution if the entries are consistent. In addition, to learn more about using string functions, read Save time by using Excel’s Left, Right, and Mid string functions. Finally, IFERROR() is new to Excel 2007. The logic for these expressions is the same in 2003, but use ISERROR() to handle the error values.

Microsoft Access


Access parsing expressions

In the Excel post above, I showed you a few expressions for parsing inconsistent name entries. The logic of relying on the position of specific characters is just as useful in Access, although Access doesn’t use the same functions.

The Access table below stores names in firstname lastname format in a single field named Name. Some, but not all entries have middle initials. Using the following expression, extracting the first name is fairly simple:
FirstName: Left([Name],InStr([Name]," ")-1)

The InStr() function returns the position of the first space character. Consequently, the Left() function extracts characters from the beginning of the entry, up to the first space character. Omit the -1 component if you need to include the space character.

Extracting the last name takes just a bit more work:
LastName: Right([Name],Len([Name])-InStrRev([Name]," "))

This expression applies the same logic, plus some. The length of the entire name minus the position of the last space character gives the number of characters to extract, counting from the end of the string. Using Robin Banks, this expression evaluates as follows:
Right(“Robin Banks”,11-6)
Right(“Robin Banks”,5)
Banks

As you might suspect by now, extracting the middle initial takes even more work:
MI: IIf(InStrRev([Name]," ")>InStr([Name]," "),Mid([Name],InStr([Name]," ")+1,InStr([Name]," ")-2),"")

The IIf() function compares the position of the first space character and the last space character. If they’re the same, there’s only one space character and consequently, no middle initial (and I could’ve written the condition that way, just as easily). If the position of the last space character is greater than the position of the first space character, there’s a middle initial (or something!) between the first and last names. The Mid() function then uses the position of the first space character to extract two characters between the first and last names. Those two characters, in this case, are the middle initial and the period character following each initial. If some names have a period character and some don’t, this expression will return inconsistent results. Using Dan D. Lyons, this expression evaluates as follows:

IIf(7>4,Mid("Dan D. Lyons",4+1,4-2)," ")
IIf(True,Mid("Dan D. Lyons",5,2)," ")
Mid("Dan D. Lyons",5,2)
D.

When parsing inconsistent data, you have to find some kind of anchor. In this example, the anchor is the position of the space characters. It’s important to note that the " " component in all of the expressions is not an empty string. There’s a literal space character between the two quotation marks.

Specify a failover host for HA clusters in VMware

VMware vSphere’s High Availability (HA) feature allows virtual machines to be restarted on other hosts in the event of a host failure. I have had a love-hate-hate-love-hate relationship with HA throughout the years; I’m keeping score of how many times it has saved me compared to biting me in the rear end.

Putting my mixed feelings about the feature aside, I recently gravitated towards a new configuration option for HA clusters in certain situations. The option to specify a failover host for HA clusters allows a specific ESXi (or ESX) host to be designated as the host to absorb the workload on the failed ESXi host. This option is a property of an HA cluster (Figure A).

Figure A



This option is set for a test cluster of only two hosts, but some of the attributes are visible quite easily. First, the vesxi4.rwvdev.intra host is designated as the HA failover host; this means that virtual machines are not intended to run on that host in a normal running configuration. This is at the expense of the other host, because there is one extremely busy host and one relatively idle host.

The use of the designated failover host offers administrators the opportunity to capture some benefits compared to the other HA options. The first benefit is that you could place a lower-provisioned host in the admission control inventory. This can include using a 2-CPU (socket) host instead of the 4-CPU hosts that exist in the rest of the cluster, thus reducing licensing costs. Another benefit is that each host that is not the failover host would be allowed to go higher in its utilization, as the admission control policy would not prohibit additional virtual machines on that host.

There are a number of critical decision points on HA, but I would be remiss if I did not mention what I feel to be the authoritative resource for this feature: the HA Deepdive from Duncan Epping’s Yellow Bricks blog. Duncan has good information about all of HA, including the designated failover host option.

Probably the best use case for designating a failover host is to combine it with individual virtual machine HA event response rules. A good example of this would be to not perform an HA failover on development virtual machines, should they be intermixed in a cluster. Figure B shows this configured in an HA cluster where all test and development virtual machines are set not to restart after an HA event.

Figure B



This is the proverbial “it depends” configuration item. There are plenty of factors that go into considering this HA cluster arrangement, but the designated failover option doesn’t seem to be used that frequently.

Rick Vanover (MCITP, MCSA, VCP, vExpert) is an IT Infrastructure Manager for a financial services organization in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

Create an easy to use Linux calendar sharing server

In my ever-continuing quest to bring Linux to business, I found one of the biggest missing pieces was the ability for Linux mail clients to easily share calendars with other Linux users. Most of the Linux mail clients (Evolution, Thunderbird, etc.) offer the ability to publish calendars or use remote calendars.

Although it's a fairly simple task to share those calendars, correctly setting up a calendar server for clients to connect to is not. That is, unless you happen upon the Radicale CalDAV Server. It is about the easiest CalDAV server I have ever installed and used.

Radicale can share calendars with most open source calendar tools, and its features include:

  • Shares calendars using CalDAV or HTTP.
  • Supports events and todos.
  • Works out-of-the-box with little to no configuration required.
  • Warns users about concurrent editing.
  • Limits access by authentication.
  • Secures connections.

Let's take a look at how Radicale can be set up on an Ubuntu 10.10 machine.

Step 1: Installation
To install Radicale on Ubuntu simply open up the Ubuntu Software Center, search for radicale, and click Install. You will need to enter your sudo password for the installation to complete. When the software is installed you can close out the Software Center and start working with Radicale.

If you are installing on a non-Ubuntu distribution, you might have to install from source. In that case, make sure you have Python installed first.

Step 2: Configuration
Believe it or not, this step is optional, as Radicale should work out of the box for you. On my Ubuntu machine hosting the Radicale Server, no configuration was necessary. But more than likely you are going to want to set up some configuration options (such as authentication). To do this, the file ~/.config/radicale/config must be edited (or created, if it’s not there).

The default configuration file looks like:

[server]
# CalDAV server hostname, empty for all hostnames
host =
# CalDAV server port
port = 5232
# Daemon flag
daemon = False
# SSL flag, enable HTTPS protocol
ssl = False
# SSL certificate path (if needed)
certificate = /etc/apache2/ssl/server.crt
# SSL private key (if needed)
key = /etc/apache2/ssl/server.key

[encoding]
# Encoding for responding requests
request = utf-8
# Encoding for storing local calendars
stock = utf-8

[acl]
# Access method
# Value: fake | htpasswd
type = fake
# Personal calendars only available for logged in users (if needed)
personal = False
# Htpasswd filename (if needed)
filename = /etc/radicale/users
# Htpasswd encryption method (if needed)
# Value: plain | sha1 | crypt
encryption = crypt

[storage]
# Folder for storing local calendars,
# created if not present
folder = ~/.config/radicale/calendars

The options above are fairly self-explanatory. For example, to require logins, change type under [acl] from fake to htpasswd and point filename at an htpasswd file. Just make the changes that suit your needs and save the file.

Once you have the configuration saved (or you need no configuration), all you have to do is start the Radicale daemon with the command radicale. You might want to set this to start up automatically. From within GNOME you can do this by clicking System | Preferences | Startup Applications and adding the radicale command.

Creating (or connecting to) calendars
It is very simple to create or connect to Radicale calendars from both Evolution and Thunderbird (with the Lightning add-on). When connecting to (or creating) a new calendar, you will be using a Network calendar with the following addresses:

For Thunderbird:

http://ADDRESS_TO_CALSERV:5232/USER/CALENDAR

For Evolution:

caldav://ADDRESS_TO_CALSERV:5232/USER/CALENDAR

Where ADDRESS_TO_CALSERV, USER, and CALENDAR are all unique to your system. If the calendar you want to connect to already exists, check the ~/.config/radicale/ directory of the user who starts the daemon on the target machine for this information. NOTE: Both calendar types will be CalDAV.
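
If you want to confirm the daemon is answering before configuring a client, a quick check from any machine with PowerShell might look like the following; the placeholders are the same as above, and curl or a Web browser pointed at the same URL works just as well:

# Request a calendar from Radicale over plain HTTP (5232 is the default port from the config above)
$wc = New-Object System.Net.WebClient
$wc.DownloadString("http://ADDRESS_TO_CALSERV:5232/USER/CALENDAR")
# A working server typically answers with iCalendar text (BEGIN:VCALENDAR ...)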

That’s all there is to it. You will now be able to add/view entries on the calendar(s) on the server. The only pitfall is that you have to manually refresh the calendars in order to see changes. That’s a small price to pay for such simplicity.

Jack Wallen was a key player in the introduction of Linux to the original Techrepublic. Beginning with Red Hat 4.2 and a mighty soap box, Jack had found his escape from Windows. It was around Red Hat 6.0 that Jack landed in the hallowed halls of Techrepublic.

5 tips for effectively managing your software assets

Properly tracking and organizing software licenses are major responsibilities of every IT manager. Organizations that have a clear understanding of their software assets and how they are utilized will be equipped to remain license-compliant and to make better purchasing decisions. Managing software assets effectively can also save enterprises thousands of dollars per year.

But many businesses have discovered that their software asset records are neither accurate nor current. Although the awareness of the importance of true asset management has increased, organizations often don’t do an adequate job of managing the risk associated with being noncompliant. Many businesses also purchase unnecessary, excess application licenses, which could result in overspending and inaccurate budgeting.

To avoid noncompliance risk and reduce software costs, businesses need to deploy a software asset management program that includes a process to ensure all applications are appropriately recorded and categorized. Here are five tips to help IT managers meet this challenge.

1: Automate the process
IT departments have historically placed an emphasis on enterprise efficiency by relying on learned best practices. But when tracking software assets, many administrators rely on antiquated manual tools while running from computer to computer. An automated solution reduces the excessive time spent on managing software assets while eliminating the manual reporting processes. With the greater insight into software allocation, as well as usage and license compliance, IT is prepared for vendor and internal inquiries. IT departments can also proactively make accurate software budget recommendations and assignments.

2: Integrate with asset management
To be cost effective and easy to use, software license management tools must be integrated into an organization’s overall asset management solution. This solution should also include software distribution, OS deployment, patch management, and remote management, since all these challenges are so closely related. Having integrated solutions has become increasingly important as IT departments face external vendor audits and internal budget cuts. An automated software license management solution that is a part of an overall asset management plan helps businesses improve efficiency and remain compliant while reducing software purchases and support costs.

3: Prepare for vendor audits
Technology vendors have recently increased their efforts to eliminate the unsanctioned use of software by performing surprise audits. Removing installed software or purchasing more licenses after an audit notification has been given is one of the worst mistakes you can make when dealing with an auditor. Organizations should conduct practice audits on a regular basis using software license management tools. In addition, they should designate a response team to ensure that their software license management practices are enough to pass an audit. An automated solution provides fast and clear access to application portfolios by generating detailed reports at any time.

4: Align software purchases, contracts, and support
Underutilized software wastes IT dollars. A software license management program can help you accurately plan your budget and gives you accurate insight into software usage. It should not only help you find out what licenses you currently have but also show you how often they are being used and by whom. Effective software management tools enable IT to free up software and negotiate the purchase price of software products. They can also help you develop a comprehensive strategy for aligning purchases, contracts, and support. This in turn avoids unnecessary purchases and keeps maintenance costs to a minimum.

5: Rely on an easy-to-implement tool that offers a one-year ROI
When researching options, look for software license management tools that are easy to implement and for which the solution provider can demonstrate a return on investment within the first year. Leading solutions should offer cost-efficient controls as well as compliance monitoring by combining processes, resources, and regulatory requirements into a single management framework. Also look for a solution that provides easy-to-read, customizable, on-demand dashboard reports to assist with vendor audits and to gain a greater understanding of product usage.

With the right solution, IT departments can avoid the risk of noncompliance using a process that does not strain staff resources. IT administrators can also improve the company’s bottom line by saving thousands of dollars per year in licensing fees.

Adee McAninch is the product marketing manager at Numara Software. This article first appeared in ZDNet Asia’s sister site TechRepublic.com.

Test your DNS name servers for spoofability

What does DNS cache poisoning mean to us? A lot. Using cache poisoning, bad guys can redirect Web browsers to malicious Web sites. After that, any number of bad things can happen.

DNS primer
Being human, we find it easier to remember names than numbers. Computers prefer numbers. So we use the Domain Name System (DNS) to keep track of both; it translates domain names into numeric addresses.

Let’s use Web browsers as an example. The user types in the name of a Web site and hits enter. The Web browser sends a DNS query to the DNS name server being used by the computer. The DNS name server checks its database for the Web site’s name and responds with the associated IP address. With the IP address, the Web browser retrieves the Web site.
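
One quick way to watch that translation happen is to ask the operating system's resolver from PowerShell (the host name here is just an example):

# Translate a name into its IP address(es) using the resolver settings the OS already uses
[System.Net.Dns]::GetHostAddresses("www.example.com") | ForEach-Object { $_.ToString() }
# Note: this goes through the OS resolver and its cache rather than querying one name server directly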

Too predictable

Two more pieces of the puzzle are needed to understand how security researcher Dan Kaminsky was able to poison DNS name server caches. They are:

  • The query transaction ID (which allows DNS responses to be matched to DNS queries) is incremented in numeric order, and queries are always sent from the same source port.
  • Applications using DNS explicitly trust the domain name/IP address exchange.

The predictability and blind acceptance allowed him to:

  • Create a rogue DNS response.
  • Send it to the computer or DNS name server asking for the associated IP address.
  • Have the rogue response accepted, as long as the query transaction ID matched and it arrived before the authoritative DNS name server's response.

After the dust settled, Kaminsky realized this technique could be used to redirect web browsers to malicious Web sites.

Increase randomness

To prevent redirection, Kaminsky came up with an elegant solution. There are 2 to the power of 16 possible query transaction IDs and 2 to the power of 16 possible source ports. Why not randomize the query transaction IDs? He also suggested using random source ports instead of the same one each time.

If you randomize the selection process for both, the number of potential combinations becomes 2 to the power of 16 times 2 to the power of 16, which is 2 to the power of 32 (more than four billion). That makes it sufficiently difficult to guess.
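
To put numbers on that, here is a small PowerShell sketch of the idea. It simply picks a random transaction ID and source port the way a patched resolver would, then shows the size of the combined guessing space (purely illustrative; real resolvers do this internally):

# A patched resolver picks both values at random for every query it sends
$txid = Get-Random -Minimum 0 -Maximum 65536      # 2^16 possible transaction IDs
$port = Get-Random -Minimum 1024 -Maximum 65536   # source port drawn from a wide range
"Transaction ID $txid, source port $port"

# Randomizing both multiplies the two spaces together: 2^16 x 2^16 = 2^32
[math]::Pow(2,16) * [math]::Pow(2,16)             # 4294967296 possible combinations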

Okay, we have a solution. But, as I alluded to earlier, not all DNS name servers are using the prescribed fixes. Thankfully, there are ways to tell if the DNS name server is updated.

Testing for spoofability

I was listening to a Security Now podcast with Steve Gibson and Leo Laporte. The topic was “Testing DNS Spoofability”. In the broadcast, Gibson mentioned he developed an online test to see if DNS name servers are susceptible to cache poisoning.

The test is called the DNS Nameserver Spoofability Test. The program exchanges a large quantity of DNS queries between the DNS name server being tested and what Gibson calls a Pseudo DNS Nameserver hosted at GRC.com.

The reason so many queries are needed is to accurately test the randomness of the query transaction ID and source port selection.

Router Crash Test

During development of the spoofability test, Gibson discovered that the test was crashing certain consumer-grade routers. GRC.com maintains a list of the routers that crash, along with an explanation of why it happens.

Scatter charts

I use OpenDNS for my DNS servers. My test results show that OpenDNS employs the fixes; the scatter chart of its query transaction IDs and source ports is thoroughly random.

A second example from GRC.com represents a DNS server using a selection algorithm that is far less random.

The final example from GRC.com is telling. Both the query transaction ID and the source port are being incremented in a linear fashion. Although the values are changing, they change in a predictable fashion. Not good.

Find a public DNS provider

There are alternatives if you find the assigned DNS name servers are not randomizing the entries sufficiently. As I mentioned earlier, I use OpenDNS. It is free, and it is the only public DNS service that offers protection from DNS rebinding attacks. GRC.com also maintains a list of other public DNS providers.
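
If you decide to point a Windows machine at one of those providers, the change can also be scripted. Here is a hedged sketch using WMI and the well-known OpenDNS addresses; it needs an elevated prompt, and you may want to narrow the filter to a single adapter:

# Point every IP-enabled adapter at the OpenDNS resolvers
Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = TRUE" |
    ForEach-Object { $_.SetDNSServerSearchOrder(@("208.67.222.222", "208.67.220.220")) }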

Final thoughts

To avoid problems resulting from being redirected to a malicious Web site, please test the DNS name servers used by your computer.

Michael Kassner has been involved with IT for over 30 years and is currently a systems administrator for an international corporation and security consultant with MKassner Net.

Backdoor ways to reboot a Windows server

When you need to reboot a Windows server, you’ll occasionally encounter obstacles to making that happen. For instance, if remote desktop services aren’t working, how can you reboot the server? Here is a list of tricks I’ve collected over the years for rebooting or shutting down a system when I can’t simply go to the Start Menu in Windows.

  • The shutdown.exe command: This gem will send a remote (or local) shutdown command to a system. Entering shutdown /r /m \\servername /f /t 10 will send a remote reboot to a system. Shutdown.exe is included with all modern Windows systems; in older versions, it was part of the Resource Kit. For more details, read the Microsoft KB article on the shutdown.exe command.
  • PowerShell Restart-Computer: The equivalent of the command above in PowerShell is shown below (a multi-server variant appears in the sketch after this list):
    Start-Sleep 10
    Restart-Computer -Force -ComputerName SERVERNAME
  • Hardware management device: If a device such as an HP iLO or Dell DRAC is in use, there is a virtual power button and remote screen console tool to show the system’s state regardless of the state of the operating system. If these devices are not configured with new servers, it’s a good idea to have them configured in case the mechanisms within the operating system are not available.
  • Virtual machine power button: If the system in question is a virtual machine, all hypervisors have a virtual power button to reset the system. In VMware vSphere, be sure to select the option to Shut Down The Guest Operating System instead of Power Off; this makes a call to VMware Tools to perform a clean shutdown. If that fails, the Power Off button will be the next logical step.
  • Console walkthrough: In the situation where the server administrator does not have physical access to the system, walking someone through the process may be effective. For security reasons, a single temporary user (domain or local) can be created with only the permission needed to reboot the server. The person assisting can log on as that temporary user, and the account is destroyed immediately after the local shutdown command is issued. Further, the temporary user could be created with a logon script that issues the reboot command automatically, so the person assisting the server administrator doesn't have to interact with the system at all.
  • Configure a scheduled task through Group Policy: If you can’t access the system in any other mainstream way–perhaps the Windows Firewall is turned on and you can’t get in to turn it off–set a GPO to reconfigure the firewall state and slip in a reboot command in the form of the shutdown.exe command executing locally (removing the /m parameter from above). The hard part will be getting the GPO to deploy quickly.
  • Enterprise system management packages: Agents from packages such as Symantec's Altiris and Microsoft System Center communicate with the management server and can receive a command to reboot the server.
  • Pull the plug: This is definitely not an ideal approach, but it is effective. For physical servers, if a managed power strip with port control is available, a single system can have its power removed and restored.
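
As a companion to the shutdown.exe and Restart-Computer items above, the PowerShell route scales nicely when several machines need the same treatment. A minimal sketch, with placeholder server names:

# Reboot a short list of servers, forcing running applications to close
$servers = "SERVER01", "SERVER02", "SERVER03"
Restart-Computer -ComputerName $servers -Force
# Afterward, Test-Connection can confirm when each box is answering again
$servers | ForEach-Object { Test-Connection -ComputerName $_ -Count 1 -Quiet }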

Rick Vanover (MCITP, MCSA, VCP, vExpert) is an IT Infrastructure Manager for a financial services organization in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

RIM promises to set up filters in Indonesia

Setting up Internet filters in Indonesia is “top priority” for Research In Motion (RIM) and the company is cooperating with the local government and carriers to implement porn blockers on its BlackBerry services.

In an e-mail statement to ZDNet Asia, the Canadian phonemaker said it has been in discussions with the government and carriers in Indonesia to set up an Internet filter and remains committed to implementing satisfactory technical solutions with its partners.

According to a report Monday by BusinessWeek, Indonesia Communication and Information Technology Minister Tifatul Sembiring said RIM has until Jan. 21 to begin filtering porn sites or face legal actions including revocation of its services.

Muhammad Budi Setyawan, the ministry's director general of post and telecommunications, said the government is not targeting RIM but pornographic Web sites, and will meet with RIM and six mobile service providers on Jan. 17 with its request to filter pornographic content. He noted that if RIM refuses to filter such materials, the company might be asked to shut down its browser service.

According to RIM’s statement, the BlackBerry maker agrees with the ministry’s “sense of urgency” on the issue.

Tifatul since last August has ordered Internet service providers in Indonesia to block pornographic Web sites.

The move, according to Reporters Without Borders, was sparked by the circulation of videos reportedly showing local celebrities having sex, leading critics to blame the Internet for a decline in values in Indonesian society.

The government’s demand for anti-porn Web filters has been met with dissent by some citizens, noted a report by AFP, which quoted a Twitter user who questioned if blocking BlackBerry services would be effective in reducing the flow of pornographic content in the country.

According to The Jakarta Post, Tifatul also outlined other demands on his Twitter account, such as setting up a server in the country for law enforcement officials to track down corruption suspects.

Indonesia is considered an important market for RIM in the Asia-Pacific region and often singled out as a success story for the BlackBerry maker. In a ZDNet Asia report last April, RIM’s Asia-Pacific director, Gregory Wade, pointed to how prepaid BlackBerry services in the country had played a key role in boosting the company’s growth in Indonesia.

Analyst: Get ready for all-in-one app stores

With more devices equipped to access app stores for content, the market will see the emergence of an all-in-one app store able to recognize the type of device being used and push apps relevant to that platform, observes an analyst, who points to major players such as Apple and Google that are already heading in that direction.

Bryan Wang, Asia-Pacific associate vice president of connectivity at Springboard Research, noted that consumers today own multiple devices and would want to be able to share their applications across these devices to have a seamless mobile Internet experience.

“We do believe that there will be a market for one-stop app shops able to recognize the device accessing it and push relevant content to end-users,” Wang said in an e-mail interview.

To this end, he pointed to operators such as Apple and Google which he said were already heading in that direction. With Apple launching its Mac App Store on Jan. 6 and Google unveiling its Chrome Web Store a month earlier on Dec. 7, both companies are extending the mobile app environment into the desktop arena, he added.

“[Opening up these desktop app stores] is one of the steps for Apple and Google to move to a one-stop shop direction,” the Springboard analyst said. “When PC app stores get larger market traction in the next year or two, we think it would be natural [for such vendors] to have their current mobile and desktop app stores combined.”

Wang noted that Apple and Google currently are the only two operators that have “the capability to attract large volumes of customers in the next couple of years”, a component that is necessary for a one-stop app shop to flourish.

The potential of a one-stop app shop also drew a positive response from Malcolm Lu, a product packaging designer, and a user of Apple’s MacBook Pro, iPad slate device and Google’s Chrome Web Store.

He told ZDNet Asia that he is “for the idea” of having an all-in-one app store as it would help him save time in looking over apps that are compatible with his respective devices.

Multi platforms not so soon
Lu, however, said it is “doubtful” a “universal” app store that is able to cut across different platforms and devices will be available any time soon.

He noted that mobile platform operators today are focused only on introducing and maintaining their own respective app stores, such as Apple with the iTunes App Store and Research In Motion with its BlackBerry App World. And this trend does not seem to be ending in the near future, he added.

Furthermore, there are differences in programming apps for the various mobile platforms and multiple devices that apps run on, making it “harder to develop a universal app store”, he noted.

Wang agreed that universal app stores will not see the light of day in the near future. He noted that in order to create a successful platform- and device-agnostic one-stop app store, the operator must already have an established brand name and customer buy-in for its existing services.

The operator should also have a multi-vendor, multi-technology approach in its business strategy in order to have a reason to create the one-stop app store in the first place, and there are not many such companies in the market today, he added.

Wang identified Facebook and Samsung as two players that could potentially fulfill one or both factors, but whether the companies would eventually set up a universal app store remains to be seen.

Industry trends appear to support these observations. Besides smartphones, tablets and desktops, carmakers are also jumping onto the app store bandwagon, further complicating the app store landscape.

The Wall Street Journal reported on Saturday that automobile manufacturers such as General Motors and Toyota have announced plans to turn dashboards into Internet-connected app platforms. General Motors, for instance, expanded its OnStar system, which was first developed to provide directions and emergency services, to include apps that access the car system and push information such as vehicle diagnostics to car owners.

According to Gartner analyst Thilo Koslowski, the auto industry's focus on apps comes as carmakers look for new ways to differentiate their products from the competition. He said in the Wall Street Journal report: "Internet-connected autos will be among the fastest-growing segments in four years." Koslowski also predicted that more than half of all new premium vehicles in the United States will support apps by 2013 and mass-market cars will reach that level in 2016.

5 tips for easy Linux application installation

Most people don’t realize how easy it is to install applications on modern releases of the Linux operating system. As the package managers have evolved into powerful, user-friendly tools, the task of installation has become equally user-friendly. Even so, some users encounter traps that seem to trip them up at every attempt.

How can you avoid these traps and be one of those Linux users happily installing application after application? With these five tips, that’s how.

1: Get to know your package manager
Probably the single most user-friendly package management system, on any operating system, is the Ubuntu Software Center. This tool is simply an evolution of the typical GUI front end for Linux package management systems. All you have to do is open that tool, search for the application you want to install, mark it for installation, and click Apply. And because there are thousands upon thousands of applications available, you can happily spend hours upon hours finding new and helpful applications to install.

2: Install the necessary compilers
If you have an application that must be installed from source, you will need to have the necessary compilers installed. Each distribution uses either a different compiler or a different release of a compiler. Some distributions, such as Ubuntu, make this task simple by providing a single package to install (issue the command sudo apt-get install build-essential). Once you have the compiler installed, you can then install applications from source.

3: No .exe allowed
This is one of those concepts that is so fundamental, yet many users don't understand it. The .exe installers are for Windows only. For Linux, you are looking for extensions such as .deb or .rpm for installation. The only way to install .exe files on a Linux machine is with the help of WINE, but most new users should probably steer clear of this tool. If you find a binary file online (one that works with your distribution), your package manager should prompt you to install the downloaded file. If you have WINE installed, and your system is configured correctly, you will be prompted (with the help of WINE) to install even .exe files.

4: Understand dependencies
This is probably one of the trickiest aspects of installing packages in Linux. When using a package manager (such as PackageKit, Synaptic, or Ubuntu Software Center), the dependencies are almost always taken care of automatically. But if you are installing from source, you will have to manually install the dependencies. If you don't get all the dependencies installed (and installed in the correct locations), the application you are installing will not work. Trying to force the installation without resolving all the dependencies will only leave you with a broken application.

5: Always start with the package manager
There are several reasons why distributions use package managers. Outside of user-friendliness, the single most important reason for package managers is to ensure system cohesiveness. If you use a patchwork of installation methods, you can't be sure that your system is aware of everything installed. This is also true for tools like Tripwire, which monitor changes in your system. You want to be as uniform and as standardized as you can in your installations. To that end, you should ALWAYS start with your package manager. Only when you can't find a precompiled binary for your distribution should you turn to installing from source. If you remain consistent with this installation practice, your system will run more smoothly for longer. If you mix and match, you might find some applications are not aware of other applications, which can really cause dependency issues.

Simple and friendly
Users do not have to fear installing applications on Linux. By following some simple guidelines, anyone (regardless of experience level) can have an easy time managing their Linux desktop. With powerful, accessible package managers, nearly every modern Linux distribution offers the user every tool they need to add, remove, and update their applications with ease and speed.

Using OData from Windows Phone 7

My initial experiences with Windows Phone 7 development were a mixed bag. One of the things that I found to be a big letdown was the restrictions on the APIs and libraries available to the developer. That said, I do like Windows Phone 7 development because it allows me to use my existing .NET and C# skills and keeps me within the Visual Studio 2010 environment that has been very comfortable for me over the years. So despite my initially poor experience in getting started with Windows Phone 7, I was willing to take a few more stabs at it.

One of the apps I wanted to make was a simple application to show the local crime rates. The U.S. government has this data on Data.gov, but it was only available as a data extract, and I really did not feel like building a Web service around a data set, so I shelved the idea. But then I discovered that the “Dallas” project had finally been wrapped up, and the Azure Marketplace DataMarket was live.

Unfortunately, there are only a small number of data sets available on it right now, but one of them just happened to be the data set I wanted, and it was available for free. Talk about good luck! I quickly made a new Windows Phone 7 application, and tried to add the reference, only to be stopped in my tracks with this error: “This service cannot be consumed by the current project. Please check if the project target framework supports this service type.”

It turns out Windows Phone 7 launched without the ability to access WCF Data Services. I am not sure who made this decision, seeing as Windows Phone 7 is a great match for Azure Marketplace DataMarket, it's fairly dependent on Web services to do anything useful, and Microsoft is trying to push WCF Data Services. My initial research turned up only a CTP from March 2010 that provided this functionality. I asked around and found out that code to do just this was announced at PDC recently and is available for free on CodePlex.

Something to keep in mind is that Windows Phone 7 applications must be responsive when performing processing and must support cancellation of “long running” processes. In my experience with the application certification process, I had an app rejected for not supporting cancellation even though it would take at most three seconds for processing. So now I am very cautious about making sure that my applications support cancellation.

Using the Open Data Protocol (OData) library is a snap. Here’s what I did to be able to use an OData service from my Windows Phone 7 application:

  1. Download the file ODataClient_BinariesAndCodeGenToolForWinPhone.zip.
  2. Unzip it.
  3. In Windows Explorer, go to the Properties page for each of the DLLs, and click the Unblock button.
  4. In my Windows Phone 7 application in Visual Studio 2010, add a reference to the file System.Data.Services.Client.dll that I unzipped.
  5. Open a command prompt, and navigate to the directory of the unzipped files.
  6. Run the command: DataSvcUtil.exe /uri:UrlToService /out:PathToCSharpFile (in my case, I used https://api.datamarket.azure.com/Data.ashx/data.gov/Crimes for the URL and .\DataGovCrime.cs for my output file). This creates a strongly typed proxy class to the data service.
  7. I copied this file into my Visual Studio solution’s directory, and then added it to the solution.
  8. I created my code around cancellation and execution. Because I am not doing anything terribly complicated, and because the OData component already supports asynchronous processing, I took a backdoor hack approach to this for simplicity. I just have booleans indicating a “Running” and “Cancelled” state. If the event handler for the service request completion sees that the request is cancelled, it does nothing.

There was one big problem: The OData Client Library does not support authentication, at least not at a readily accessible level. Fortunately, there are several workarounds.

  • The first option is what was recommended at PDC: construct the URL to query the data manually, and use the WebClient object to download the XML data and then parse it manually (using LINQ to XML, for example). This gives you ultimate control and lets you do any kind of authentication you might want. However, you are giving up things like strongly typed proxy classes, unless you feel like writing that code yourself (have fun). A rough sketch of this approach appears below.
  • The second alternative, suggested by user sumantbhardvaj in the discussion for the OData Client Library, is to hook into the SendingRequest event and add the authentication. You can find his sample code on the CodePlex site. I personally have not tried this, so I cannot vouch for the result, but it seems like a very reasonable approach to me.
  • Another alternative that has been suggested to me is to use the Hammock library instead.

For simple datasets, the WebClient method is probably the easiest way to get it done quickly and without having to learn anything new.
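
To make that first option more concrete, here is a rough sketch of the manual approach. It is written in PowerShell purely for illustration (on the phone itself this would be C# using WebClient and LINQ to XML), and the credentials shown are placeholders for whatever your DataMarket account requires:

# Build the URL by hand, authenticate, download the raw Atom/XML, and parse it yourself
$wc = New-Object System.Net.WebClient
$wc.Credentials = New-Object System.Net.NetworkCredential("ACCOUNT_ID", "ACCOUNT_KEY")   # placeholder credentials
$raw = $wc.DownloadString("https://api.datamarket.azure.com/Data.ashx/data.gov/Crimes")  # the service root used above
[xml]$doc = $raw   # on the phone, you would hand the downloaded string to LINQ to XML instead
$doc.DocumentElement.Name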

While it is unfortunate that the out-of-the-box experience with working with OData is not what it should be, there are enough options out there that you do not have to be left in the cold.

Disclosure of Justin’s industry affiliations: Justin James has a contract with Spiceworks to write product buying guides; he has a contract with OpenAmplify, which is owned by Hapax, to write a series of blogs, tutorials, and articles; and he has a contract with OutSystems to write articles, sample code, etc.

Justin James is an employee of Levit & James, Inc. in a multidisciplinary role that combines programming, network management, and systems administration. He has been blogging at TechRepublic since 2005.

Microsoft Office Outlook


Change Outlook’s Calendar color to better highlight the current day

In Outlook's Month view, the current day is a bit washed out; the default blue is a tad lighter than other highlighted areas.

It isn’t impossible to find, but it does seem to fade into the background. (It’s even more obscure in Outlook 2003.)

The color is in keeping with the theme, but if you want the current day to pop out a bit, try changing the default color.

In Outlook 2003 and 2007, do the following to change this property:

  1. From the Tools menu, choose Options and click the Preferences tab (if necessary).
  2. On the Preferences tab, click Calendar Options in the Calendar section.
  3. In the Calendar Options section, choose a new color from the Default Color dropdown.
  4. Click OK.

If you’re using Outlook 2010, do the following:

  1. Click the File menu and then choose Options.
  2. Click Calendar in the left pane.
  3. In the Display Options section, choose a new color from the Default Calendar Color dropdown.
  4. Click OK.

This property changes the color of the whole Calendar, not just the current day, so it's a big change. You'll want to choose a color that contrasts with the current day's border. In this case, the orange border is easy to see next to the green–at least, I think it is. Personal preference strongly figures into this particular choice.

This tip won't set your world on fire or anything. It's just one of those simple things that you can control, so you should if it makes your day a bit easier!

Microsoft Word


Use Word’s Replace to transpose a column of names

You’ll often see a column of names entered in a Word document either as a list or part of a table. Listing the names is no problem, but changing their order after they’re entered could be.

For instance, let’s say your document contains a list of names entered in firstname lastname format, but you want them in lastname, firstname format. Do you have to re-enter them? No, there’s a simple wildcard trick you can use with Word’s Replace feature that will take care of the transposing for you.

To get Word to transform a list or column of names, do the following:

  1. Select the list of names you want to transpose.
  2. From the Edit menu, choose Replace. In Word 2010, click Replace in the Editing group on the Home tab.
  3. Click the More button and check the Use Wildcards option. This is an important step–if you miss it, this technique won’t work.
  4. In the Find What control, enter (<*>) (<*>), with a space character between the two sets.
  5. In the Replace With control, enter the following characters \2, \1, with a space character before the second backslash character.
  6. Click Replace All. Word will transpose the first and last names and separate them with a comma character.
  7. When Word asks you to expand the search, click No, and then Close to return to the document.


Wildcard explanation

Once you understand the wildcards, the whole trick is easily exposed:

  • (): The parentheses aren’t true wildcards, not in a matching sense. They allow you to divide a pattern into logical sequences.
  • <>: The brackets mark the beginning and ending of a word or phrase.
  • \: The backslash is used with a number to refer back to one of the parenthesized sequences (above); \1 means the first sequence and \2 means the second.

In this case, the Find What code splits the two names into two separate sequences. In the Replace With code, the \2 component inserts the contents of the second sequence (the last name), followed by a comma and a space, and the \1 component then inserts the contents of the first sequence (the first name). As you can see, you're not limited to just transposing first and last names. With these wildcard tools, you can rearrange quite a bit of content!

Microsoft Excel


How to sum values in an Excel filtered list

Filters are a powerful and easy-to-use feature. Using filters, you can quickly limit data to just the records you need to see. Summing filtered records is another matter. You might try a SUM() function but you might get a surprise–well, I can promise you’ll get a surprise.

The figure below shows a filtered list. You can tell by the row numbers to the left that many rows are hidden. (We'll skip how the actual filter works. To learn more about that, read How to use And and Or operators with Excel's Advanced Filter.)

The next figure shows what happens when you try to sum the filtered values. You can easily tell that the result isn’t correct; the value is too high, but why? The SUM() function is evaluating all the values in the range D14:D64, not just the filtered values. There’s no way for the SUM() function to know that you want to exclude the filtered values in the referenced range.

The solution is much easier than you might think! Simply click AutoSum–Excel will automatically enter a SUBTOTAL() function, such as =SUBTOTAL(9,D6:D82), instead of a SUM() function. This function references the entire list, D6:D82, but it evaluates only the filtered values.

About SUBTOTAL()

Although the SUBTOTAL() function references the entire list of values in column D, it evaluates only those in the filtered list. You might think that’s because of the first argument, the value 9. This argument tells Excel to sum the referenced values. The following table lists this argument’s acceptable values:

Evaluates hidden values Ignores hidden values Function
1 101 AVERAGE()
2 102 COUNT()
3 103 COUNTA()
4 104 MAX()
5 105 MIN()
6 106 PRODUCT()
7 107 STDEV()
8 108 STDEVP()
9 109 SUM()
10 110 VAR()
11 111 VARP()

At this point, you might be saying, Wait a minute! The value 9 is supposed to evaluate hidden values. Shouldn't the correct argument be 109? It's a valid question, and I have an explanation; I just don't think it's a great explanation: SUBTOTAL() ignores rows that aren't included in the result of a filter, regardless of the argument you specify. It's a quirk–just one of those little details you need to know about the function. Whether you use 9 or 109, SUBTOTAL() will evaluate only the visible values in a filtered list. (The distinction matters only for rows you hide manually: the 100-series arguments ignore manually hidden rows as well, while 1 through 11 include them.)

10 ways to keep hard drives from failing

Hardware prices have dropped considerably over the last decade, but it’s irresponsible not to care for the hardware installed on machines. This is especially true for hard drives. Hard drives are precious commodities that hold the data employees use to do their jobs, so they should be given the best of care. Inevitably, those drives will die. But you can take steps to prevent a premature hard disk death. Let’s examine 10 such steps to care for the health of your drives.

1: Run chkdsk
Hard disks are eventually going to contain errors. These errors can come in the form of physical problems, software issues, partition table issues, and more. The Windows chkdsk program will attempt to handle any problems, such as bad sectors, lost clusters, cross-linked files, and/or directory errors. These errors can quickly lead to an unbootable drive, which will lead to downtime for the end user. The best way I have found to take advantage of chkdsk is to have it run at next boot with the command chkdsk X: /f, where X is the drive you want to check. If the drive is in use, this command will inform you that the disk is locked and ask whether you want to run chkdsk the next time the system restarts. Select Y to allow this action.

2: Add a monitor
Plenty of applications out there will monitor the health of your drives. These monitors offer a host of features that run the gamut. In my opinion, one of the best choices is the Acronis Drive Monitor, a free tool that will monitor everything from hard drive temperature to percentage of free space (and everything in between). ADM can be set up to send out email alerts if something is amiss on the drive being monitored. Getting these alerts is a simple way to remain proactive in the fight against drive failure.

3: Separate OS install from user data
With the Linux operating system, I almost always separate the user’s home directories (~/) from the OS installation onto different drives. Doing this ensures the drive the OS is installed upon will enjoy less reading/writing because so much of the I/O will happen on the user’s home drive. Doing this will easily extend the life of the drive the OS is installed on, as well as allow you to transfer the user data easily should an OS drive fail.

4: Be careful about the surrounding environment
Although this seems like it should go without saying, it often doesn’t. On a daily basis, I see PCs stuck in tiny cabinets with zero circulation. Obviously, those machines always run hot, thus shortening the lifespan of the internal components. Instead of shoving those machines into tight, unventilated spaces, give them plenty of breathing room. If you must cram a machine into a tight space, at least give it ventilation and even add a fan to pull out that stale, warm air generated by the PC. There’s a reason why so much time and money have gone into PC cooling and why we have things like liquid cooling and powerful cooling systems for data centers.

5: Watch out for static
Here’s another issue that should go without saying. Static electricity is the enemy of computer components. When you handle them, make sure you ground yourself first. This is especially true in the winter months or in areas of drier air. If you seem to get shocked every time you touch something, that’s a good sign that you must use extra caution when handling those drives. This also goes for where you set those drives down. I have actually witnessed users placing drives on stereo speakers, TVs, and other appliances/devices that can give off an electromagnetic wave. Granted, most of these appliances have magnets that are not strong enough to erase a drive. But it’s a chance no one should take.

6: Defragment that drive
A fragmented drive is a drive being pushed to work harder than it should. All hard drives should be used in their most efficient states to avoid excess wear and tear. This includes defragmenting. To be on the safe side, set your PC(s) to automatically defrag on a weekly basis. This works to extend the life of your drive by keeping the file structure more compact, so the read heads are not moving as much or as often.

7: Go with a solid state drive
Solid state drives are, for all intents and purposes, just large flash drives, so they have no moving parts. Without moving parts, the life of the drive (as a whole) is naturally going to be longer than it would if the drive included read heads, platters, and bearings. Although these drives will cost more up front, they will save you money in the long run by offering a longer lifespan. That means less likelihood of drive failure, which will cause downtime as data is recovered and transferred.

8: Take advantage of power save
On nearly every OS, you can configure your hard drive to spin down after a given time. In some older iterations of operating systems, drives would spin 24/7–which would drastically reduce the lifespan of a drive. By default, Windows 7 uses the Balanced Power Savings plan, which will turn off the hard drive after 20 minutes of inactivity. Even if you change that by a few minutes, you are adding life to your hard drive. Just make sure you don’t shrink that number to the point where your drive is going to sleep frequently throughout the day. If you are prone to take five- to 10-minute breaks often, consider lowering that time to no less than 15 minutes. When the drive goes to sleep, the drive is not spinning. When the drive is not spinning, entropy is not working on that drive as quickly.

9: Tighten those screws
Loose mounting screws (which secure the hard drive to the PC chassis) can cause excessive vibrations. Those vibrations can damage the platters of a standard hard disk. If you hear vibrations coming from within your PC, open it and make sure the screws securing the drive to the mounting platform are tight. If they aren't, tighten them. Keeping your hardware nice and tight will help extend the life of that hardware.

10: Back up
Eventually, that drive will fail. No matter how careful you are, no matter how many steps you take to prevent failure, the drive will, in the end, die a painful death. If you have solid backups, at least the transition from one drive to another will be painless. And by using a backup solution such as Acronis Universal Restore, you can transfer a machine image from one piece of hardware to another piece of hardware with very little issue.

Jack Wallen was a key player in the introduction of Linux to the original TechRepublic. Beginning with Red Hat 4.2 and a mighty soap box, Jack had found his escape from Windows. It was around Red Hat 6.0 that Jack landed in the hallowed halls of TechRepublic.

Five tips for finding a cloud solution that’s ready for your users

Cloud computing is here to stay. It has quickly earned a reputation as a powerful business enabler, based on benefits such as scalability, availability, on-demand access, rapid deployment, and low cost. IT-savvy users in development and test functions have adopted the cloud model to accelerate application lifecycles. And with recent innovations in self-service access, users in consulting, training, and sales demo areas are also becoming the direct consumers of cloud services.

As these mainstream users adopt cloud services, many companies find “infrastructure-oriented” cloud services to be intimidating and difficult to use, since they were designed for IT pros. To be of value to functional users, a business cloud solution must be simple and self-service oriented, much like iTunes. This is especially important because many companies do not have sufficient IT resources to help set up, code, and customize cloud services.

A business cloud solution should be usable–not just codeable–from day one. Here are some steps you can take to determine whether a cloud solution is usable for your business.

1: Verify that the cloud directly addresses your business problem
What business problem are you trying to solve with the cloud? Having this type of focus can help you avoid the technology trap. If you’re evaluating a cloud solution for multiple functional users, including support, training, or business analysts, be sure that the cloud solution addresses their needs. A cloud that offers pure infrastructure will make it hard for functional users to accomplish business tasks without a UI framework to guide the workflow. If you are moving to the cloud to enable better collaboration across the team, ensure the cloud service provider offers a granular user access model that enables teams to assign rights to users based on their roles.

2: Focus on usability
Today’s enterprise business users need a simple self-service cloud solution that enables them to implement new ideas and collaborate with customers. Usability includes requirements such as configurability, self-service access, collaboration, visibility, and control. Ask yourself these questions:

  • Can the cloud be easily configured for different use-cases?
  • Does it deliver team management capabilities to enforce policies and role-based access?
  • Can your employees collaborate with prospects, customers, and partners and work on parallel streams without being constrained?
  • Does the cloud provide detailed usage reports and control mechanisms?

These are key requirements to enable business agility no matter the size of your organization or the technical maturity of your team. These capabilities will be applauded by your business users, as they don’t have time to build new IT skill sets and sit through hours of cloud training.

3: Determine whether the cloud runs existing applications without any rewrites
Most users are already familiar with the business and technical applications they use today, whether it’s email, training, or sales demo applications. Clouds that power these applications without any changes will deliver immediate value across your organization. Over the years, we have learned firsthand that business users won’t wait for IT to build or rewrite applications for use in the cloud. Time is money, and neither the business user nor the IT department has any to waste. As a result, the ability to run existing applications without any changes is a key factor in determining whether a cloud is easy to use.

4: Assess whether the cloud aligns cloud operating costs with business value
Cloud services do not require an upfront capital investment, but a usage-based pricing model can lead to sticker shock. To ensure that your cloud costs are in alignment with business value, see whether the cloud provider offers a service that measures the value you receive on a per-user basis. You can also ask whether the cloud provider offers distinct pricing for users at different levels. This can help you avoid paying the same fee for light and heavy users within your organization. Find out whether the cloud allows you to apply quotas to individuals and business units to cap usage at soft or hard budget limits. You will also want to ask whether you can automatically suspend resources when they are not in use to avoid the overuse of the cloud and resulting costs.

5: Pay special attention to responsive support
Successfully adopting new technologies, such as cloud computing, often requires a responsive support organization that can attend to your needs. Find out whether you can call a cloud provider directly or whether you must work through an online form or email inquiry to communicate about a cloud service. Also ensure that the support team will respond to your inquiries within a few hours versus a day or more.

The payoff
By following these steps to determine the cloud solution that’s right for your users, your organization will be well equipped to drive business agility, reduce costs, and accelerate your key business activities.

Sundar Raghavan is chief product and marketing officer at Skytap, a provider of cloud automation solutions. He is an industry veteran with an 18-year career in product and marketing roles at Google, Ariba, Arbor Software (now part of Oracle), and Microstrategy.

Enable a distribution list’s moderation features in Exchange 2010

Exchange 2010 includes a feature that has been needed since Exchange started supporting distribution groups: moderated distribution groups. With distribution groups, users who have the rights to send messages to the list can do so with unfettered access. If you’re able to send to the list, you can send anything as often as you like. This may create a situation in which users get too much mail that, in many cases, won’t pertain to them. It increases server load, and even worse, it’s an inefficient way to do business. I speak from experience.

We have three common distribution groups at Westminster College–we have one for students, one for faculty, and one for staff–with way too many people allowed to send way too much mail. One of the reasons that our college is moving to Exchange 2010 is that we’re planning to make significant use of moderated distribution groups in Exchange 2010. We’ll couple the implementation with some other policy- and technology-based mechanisms to better target messages at groups to which the messages pertain and get rid of our current scattershot approach to messaging.

The group creation process
The creation of a moderated distribution list starts out like any other list; in fact, you don’t actually create a moderated distribution list–you enable moderation features on an existing distribution group. You probably already know how to create a distribution group in Exchange, but if you don’t, here’s a quick run through: From the Exchange Management Console, go to Recipient Configuration | Distribution Group; from the Actions pane, click New Distribution Group and follow the wizard’s instructions.

Group moderation features
Once a group is created, you can enable moderation features by opening the group properties (right-click the group and choose Properties). On the Properties page, go to the Mail Flow Settings tab (Figure A). In this dialog box, select the Message Moderation option and then click the Properties button to open the Message Moderation window (Figure B).

Figure A

The Mail Flow Settings tab for the distribution group

Figure B

The Message Moderation window

In the Message Moderation window, select the checkbox next to Messages Sent To This Group Have To Be Approved By A Moderator; this enables the list's moderation features. Next, choose which users will be designated as moderators for the group. If you don't choose anyone, the group owner (the person who created the group) will be responsible for message moderation.
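
If you prefer the Exchange Management Shell to the console, the same settings can be applied with Set-DistributionGroup. A minimal sketch, with the group and moderator names as placeholders:

# Enable moderation on an existing distribution group and designate a moderator
Set-DistributionGroup -Identity "All Staff" -ModerationEnabled $true -ModeratedBy "jsmith"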

Look at Figure C, and you’ll see that I attempted to send a message to the new list, and the moderator has been notified. It’s up to the moderator to decide whether to approve or reject the message. Clicking the Approve button simply allows the message to be sent. Clicking Reject brings you to a question: Simply Reject The Message, or Reject With Additional Comment. In Figure D, you see the message that is sent to the sender when a message is rejected.

Figure C

Approve or reject the message

Figure D

The message was rejected.

Important note
Before you make heavy use of moderated distribution lists, you should make sure that the message doesn’t have to pass through any non-Exchange 2010 Hub Transport servers. Older Hub Transport servers will simply pass messages on to group members and ignore moderation options.

Scott Lowe has spent 15 years in the IT world and is currently the Vice President and Chief Information Officer for Westminster College in Fulton, Missouri. He is also a regular contributor to TechRepublic and has authored one book, Home Networking: The Missing Manual (O’Reilly), and coauthored the Microsoft Exchange Server 2007 Administrator’s Companion (MS Press).

Smartphone enterprise security risks and best practices

If your organization allows users to connect their smartphones to the company network, you need to consider the following potential security risks and then develop policies for addressing those issues. I also list 10 security best practices for your company’s smartphone policies.

Potential smartphone security risks:
Lack of security software

Smartphones can be infected by malware delivered across the Internet connection, or from an infected PC when the phone is connected to the PC over USB to sync data. It’s even possible to infect the phone via a Bluetooth connection. It’s a good idea to require that those users who connect their smartphones to your network install security software on the devices.

Mobile security software is available for all of the major smartphone platforms. Some of the most popular mobile security suites include Kaspersky Mobile Security, Trend Micro Mobile Security, F-Secure Mobile Security, and Norton’s mobile security products.

Security bypass
Some phones make it easy to bypass security mechanisms for the convenience of the user. This makes it a lot easier and less frustrating for those who are trying to set up their phones to connect, but it also defeats the purpose of those security measures.

For example, I was able to easily set up an Android phone (Fascinate) with an Exchange Server account despite the fact that it notified me that there was a problem with the certificate. It simply asked me if I wanted to accept all SSL certificates and set it up anyway. I clicked Yes and was connected to my mail. On a Windows Phone 7 device, that same message gave me no option for bypassing the certificate problem. I had to import the certificate to the device and install it before I could access the mail. This was obviously more trouble, but also more secure.

Web security
Web browsers on smartphones have gotten a lot better and are actually usable. However, the web is a major source of malicious code, and with a small screen, it’s more difficult for users to detect that a site is a phishing site. The malware can then be transferred onto the network from the phone. To protect the network, you should use a corporate firewall that does deep packet inspection of the smartphone traffic.

The Wi-Fi threat
Most modern smartphones utilize the wireless carrier’s 3G or 4G network, as well as connect to Wi-Fi networks. If users connect their phones to an unsecured Wi-Fi network, they become vulnerable to attack. If company information (such as a network password) is stored on the phone, this creates a real security issue. If the user connects back to the corporate network over a public Wi-Fi network, it could put the entire company network at risk. Users should be required to connect to the company network via an SSL VPN, so that the data traveling between the phone and the company network will be encrypted in transit and can’t be read if it’s intercepted.

Data confidentiality
If users store business-related information on their smartphones, they should be required to encrypt the data in storage, both data that is stored on the phone’s internal storage and on flash memory cards. Interestingly, a recent article in Cellular News notes that a Goode Intelligence survey found that 64 percent of users don’t encrypt the confidential data stored on their smartphones. This is despite the fact that another survey by Juniper Networks found that more than 76 percent of users access sensitive information with their mobile devices.

In the past, this could be justified by the amount of processing power required to encrypt data and the slow processors on the phones. Today’s phones, however, boast much more powerful hardware; the Motorola Droid 2 Global, for example, has a 1.2 GHz processor.

You also need to consider cached data in smartphone applications that are always running. Some applications display updates on the screen that could contain confidential data, as well. This is another reason to password-protect the phone. Smartphones should be capable of being remotely wiped if lost or stolen.

Physical security
Because of their highly portable nature, smartphones are particularly prone to loss or theft, resulting in unauthorized persons gaining physical access to the devices. In addition, some people may share their phones with family members or loan them to friends from time to time. If those phones are set up with corporate email or VPN software configured to connect to the corporate network, for example, this is a security problem.

A basic measure is to require that users safeguard their devices by enabling PIN or password protection to get into the operating system when you turn the phone on or to unlock it. Most smartphones include this feature but most users don’t enable it because it takes a little more time to enter the PIN/password each time. This will protect from access by a casual user who finds the phone or picks it up when the owner leaves it unattended. However, those features can often be defeated by a knowledgeable person.

Android 2.0.1 had a bug that made it easy to get to the homescreen without entering the PIN by simply hitting the Back button when a call came in on the locked Droid. The iPhone had a similar issue in versions 2.0.1 and 2.0.2, which let you get around the security by hitting Emergency Call and double clicking the Home button.

In the future, PINs and passwords may be replaced by biometric or facial recognition systems.

Security best practices for smartphone policies
Smartphone security in the business environment requires a two-pronged approach: protect the phones from being compromised and protect the company network from being compromised by the compromised phones. Here are some security best practices that you can incorporate into your smartphone policies.

  1. Require users to enable PIN/password protection on their phones.
  2. Require users to use the strongest PINs/passwords on their phones.
  3. Require users to encrypt data stored on their phones.
  4. Require users to install mobile security software on their phones to protect against viruses and malware.
  5. Educate users to turn off the applications that aren’t needed. This will not only reduce the attack surface, it will also increase battery life.
  6. Have users turn off Bluetooth, Wi-Fi, and GPS when not specifically in use.
  7. Have users connect to the corporate network through an SSL VPN.
  8. Consider deploying smartphone security, monitoring, and management software such as that offered by Juniper Networks for Windows Mobile, Symbian, iPhone, Android, and BlackBerry.
  9. Some smartphones can be configured to use your rights management system to prevent unauthorized persons from viewing data or to prevent authorized users from copying or forwarding it.
  10. Perform a careful risk/benefit analysis before deciding whether to allow employee-owned smartphones to connect to the corporate network.

Debra Littlejohn Shinder, MCSE, MVP is a technology consultant, trainer, and writer who has authored a number of books on computer operating systems, networking, and security. Deb is a tech editor, developmental editor, and contributor to over 20 additional books on subjects such as the Windows 2000 and Windows 2003 MCSE exams, CompTIA Security+ exam, and TruSecure’s ICSA certification. She has authored training and marketing material, corporate whitepapers, training courseware, and product documentation for Microsoft Corp. and other technology companies. Deb currently specializes in security issues and Microsoft products.

6 ways to get free information about Word

Microsoft Word


6 ways to get free information about Word

Hiring someone to train your troops to use Word is a great idea, but there won't always be a trainer nearby. Fortunately, there are a number of ways users can get help for free. It might take a bit of research and perseverance, but you can usually find the help you need.

1. [F1]
The first line of defense is [F1]. Press [F1] and enter a few descriptive words, such as “change style” or “delete header”. Word will display a list of help topics based on your input. Sometimes this works great, and sometimes the results are inconsistent. However, it's the best place to start, because sometimes the answer pops right up!

[F1] is available in all Office applications. You must install Help for these files to be available.

2. Microsoft Answers
Microsoft Answers is a free support site (forum). If you want to search available posts, enter a question in the Find Answer control. If you don’t find what you need, click Ask A Question (at the bottom of the page). You have to sign in using your Windows Live ID. If you don’t have one, there’s a link for that too.

Microsoft Answers supports Office, not just Word.

3. Word MVPs
MVPs are volunteers who share their expertise, worldwide and for free. Microsoft honors those who stand out with the MVP title. MVPs really know their stuff, and there are two ways to benefit from their expertise and generosity. First, visit The Word MVP Site. There's a lot of information readily available. If you don't find an answer, click Contact, read the instructions, and submit your question. There's no guarantee anyone will respond, but it can't hurt. However, try to find the answer yourself first. You're probably not going to get a response to a question that's answered by an existing Help file. And by all means, be polite. These folks provide this service for free.

In addition, MVP Web Sites lists current MVPs with links to their sites. You can't submit questions, but you will find valuable information.

4. Microsoft Knowledge Base
A long-time favorite support site is Microsoft's Knowledge Base. This is a huge database of articles that offers how-to instructions, workarounds for bugs, and so on. The articles are a bit dry and sometimes difficult to follow, but you'll usually find something you can use. There's even an article on how to use the Knowledge Base!

5. Microsoft Word Help and How-to
Word Help and How-to is another site supported by Microsoft. Use keywords to search the available files. You won’t get personalized answers, but you might find just what you need.

6. Listservs
My favorite resource is a listserv; I’m a member of many. If you’re not familiar with the term, a listserv is an email server (group). You send messages and other members respond, all via email. Yahoo! Groups is a good place to start, but there are private listservs as well. Search on “Microsoft Word” and see what’s available.

It might take a while to find just the right group. In addition, they’re a bit like potato chips. Joining one inevitably leads to joining more–you’ve been warned!

Microsoft Excel


Keep users from selecting locked cells in Excel

Most of us create custom workbooks that others update. You probably protect the sheets and unlock only the input cells. That way, users can't accidentally delete or change formulas and other critical values. The worst they can do is enter invalid values.

Unlocking input cells and protecting sheets is a simple enough process, but a truly knowledgeable user can get around it. For those users, there’s a simple macro for resetting things. First, let’s unlock input cells in the simple sheet shown below.

In this particular sheet, users only need to update two cells: B1 and B2. You’ll want to unlock your input cells, as follows, before you protect the sheet:

  1. Select the input cells. In this case, that’s B1:B2.
  2. Right-click the selection and choose Format Cells from the resulting context menu.
  3. Click the Protection tab.
  4. Uncheck the Locked option.
  5. Click OK.

The next step is to protect the sheet as follows:

  1. From the Tools menu, choose Protection, and then select Protect Sheet. In Excel 2007 and 2010, click the Review tab | Protect Sheet (in the Changes group).
  2. Enter a password.
  3. Uncheck the Select Locked Cells option.
  4. Click OK.
  5. Enter the password a second time to confirm it.
  6. Click OK.
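
If you set up workbooks like this regularly, the same unlock-and-protect steps can be scripted. Here is a minimal sketch using PowerShell and Excel COM automation; the file path, sheet name, and password are placeholders, and 1 is the numeric value of the xlUnlockedCells constant:

$excel = New-Object -ComObject Excel.Application         # requires Excel installed on the machine
$wb = $excel.Workbooks.Open("C:\Books\IndirectEx.xlsx")   # placeholder path
$ws = $wb.Worksheets.Item("IndirectEx")                   # sheet name used in this example
$ws.Range("B1:B2").Locked = $false                        # unlock the input cells
$ws.EnableSelection = 1                                   # 1 = xlUnlockedCells (runtime setting; see the macros below)
$ws.Protect("password")                                   # placeholder password
$wb.Save()
$excel.Quit()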

At this point, you can select and change the contents of cells B1 and B2, but you can't select any other cells in the sheet.

As I mentioned, it won’t always matter if a user can select locked cells. On the other hand, the setup I’m suggesting creates an easy-to-follow data entry map. There’s no confusion for the user–the only updateable cells are those the user can select.

This much you might already know. What’s a bit scary is that a user can quickly undo the selection property as follows:

  1. From the View menu, choose Toolbars.
  2. Select Control Toolbox.
  3. Click the Properties tool.
  4. In the properties window, change the EnableSelection property to 0-xlNoRestriction.
  5. Click OK.

Users can also access this property via the VBE. In Excel 2010, the user can display the Developer tab (via File | Options | Customize Ribbon) and click Properties in the Controls group; in Excel 2007, the Developer tab is enabled from Office Button | Excel Options | Popular.

After resetting the EnableSelection property to 0, users can select any cell in the sheet, but they still can’t alter cell contents, except for the cells you unlocked before protecting the sheet. This doesn’t seem all that important, unless your users don’t know what they’re supposed to do. In this simple sheet, the input cells are clear, but a complex sheet with non-contiguous input ranges will certainly be more confusing.

To reclaim the original settings, include two macros: One that resets the property when the workbook is opened and a second that resets the property when the selection in the sheet changes. Open the Visual Basic Editor and double-click ThisWorkbook in the Project Window. Then, enter the following macro:

Private Sub Workbook_Open()
  'Disable locked cells in IndirectEx sheet.
  Worksheets("IndirectEx").EnableSelection = xlUnlockedCells
End Sub

That macro will reset the property when the workbook is opened. That way, users always start with the right setting. To add the macro that acts on a selection change in the actual sheet, double-click the sheet (by name) in the VBE Project window and enter this macro:

Private Sub Worksheet_SelectionChange(ByVal Target As Range)
  'Reset if user manages to disable enable selection property.
  Worksheets("IndirectEx").EnableSelection = xlUnlockedCells
End Sub

The only difference is the event that executes each macro. The SelectionChange event fires when a user changes the cell selection (only in the specified sheet, not throughout the entire workbook). Users won’t notice it at all unless they manage to disable the EnableSelection property (as described earlier). Then, the user will be able to select a locked cell. Doing so will execute the macro, which will reset the property.  The user will be able to select only one locked cell before the macro resets the property.

The truth is, a user who's smart enough to get around your locked cells might also know how to circumvent your macros, but they're worth a try.

Microsoft PowerPoint


Repeat a custom image across a slide background in PowerPoint

PowerPoint provides a number of pre-defined backgrounds but you might want to use an image of your own. Fortunately, PowerPoint is accommodating; it’s easy to repeat a custom image across a slide’s background. For instance, the following image is a .png file created in Paint. PowerPoint will have no problem working with it. This file is relatively small at 182 by 282 pixels and 2881 bytes. Work with the smallest files possible.

Once you have an image file, you’re ready to insert it, as follows:

  1. Right-click a slide’s background and choose Format Background.
  2. Click the Picture Or Texture Fill option.
  3. Click the File button (under Insert From).
  4. Use the Insert Picture dialog to locate the file. Select the file and click Insert.
  5. Click the Tile Picture As Texture option.
  6. Click Close to apply to the current slide. Click Apply to All and then click Close to apply to all the slides in the presentation.

It takes a few more clicks in PowerPoint 2003:

  1. Right-click a slide’s background and choose Background.
  2. Click the dropdown under Background Fill and choose Fill Effects.
  3. Click the Texture tab and then click the Other Texture button.
  4. Use the Select Texture dialog to locate the file. Select the file and click Insert.
  5. Click OK and then click either Apply or Apply To All.

To save the background as a separate file, right-click the background and choose Save Background.

The image in this example is too busy to actually use as a background, but it shows how easy it is to work with an abstract pattern. Insert the file as a texture and PowerPoint does all the rest. It couldn't be simpler!

Update VMware Tools from PowerCLI

For vSphere installations, the VMware Tools drivers allow virtual machines to connect to the ESX or ESXi hypervisor for optimal performance, as well as take advantage of all of the current virtual devices. Each incremental update of VMware ESXi or ESX may require an update to the VMware Tools installation on the guest virtual machines. Keeping VMware Tools up to date can be a task that gets away from you quickly.

One fast and repeatable way to update VMware Tools is to use PowerCLI, VMware’s PowerShell implementation. A number of commands (Cmdlets) are available to make quick work of this task; for instance, the Update-Tools Cmdlet in PowerCLI allows a guest to receive an update to VMware Tools.

To utilize this Cmdlet, we’ll take an example of the DROBO-WS2K8R2-SQL2K8 virtual machine with an out of date VMware Tools installation (Figure A).

Figure A


The following PowerCLI string will update VMware Tools on the virtual machine in question:

Update-Tools -NoReboot -VM DROBO-WS2K8R2-SQL2K8 -Server VC4.RWVDEV.INTRA

In this example, the VM is specified, as well as the vCenter Server (VC4.RWVDEV.INTRA). When the command is processed, it is displayed in the vSphere Client (Figure B).

Figure B


Note that the -NoReboot option was specified during this iteration of the Cmdlet and is new to the PowerCLI implementation that came with vSphere 4.1. While it will not reboot the virtual machine, there will be an impact to the Windows guest operating system. A VMware Tools upgrade will in most situations update the driver for the network interface within the virtual machine; this will cause a momentary loss of network connectivity of the guest virtual machine that is self-recoverable yet noticeable. Keep this in mind when using the script.

If you need to update multiple virtual machines, you have several options. The easiest is to use a wildcard in the -VM value. Alternatively, you can run a separate line for each virtual machine when you want to be explicit.
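
For example, to catch every virtual machine whose name starts with DROBO- in one pass (the names below are from this example or hypothetical), the wildcard form looks like this:

Update-Tools -NoReboot -VM DROBO-* -Server VC4.RWVDEV.INTRA

The explicit form is simply one line per virtual machine:

Update-Tools -NoReboot -VM DROBO-WS2K8R2-SQL2K8 -Server VC4.RWVDEV.INTRA
Update-Tools -NoReboot -VM DROBO-WS2K8R2-IIS -Server VC4.RWVDEV.INTRA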

Rick Vanover (MCITP, MCSA, VCP, vExpert) is an IT Infrastructure Manager for a financial services organization in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

Secret to succeeding with social media apps in your organization

There isn’t a serious IT leader on the planet who isn’t interested in figuring out how to capture the power and benefits of social media applications for their enterprise. Now before you get the wrong idea, let me be clear: I’m not talking about mining social media sites like Facebook or trying to open your enterprise to social media applications.

What I am referring to is capturing the essence of Facebook and other social media hubs, i.e., creating a collaborative, user-driven environment that connects people to a common purpose. In this context, we're connecting people who work for the same company in order to work better, faster, and easier. And in so doing, we're streamlining and promoting communication, information distribution, collaboration, and community building in much the same way that Facebook does, by moving people on to a central platform for messaging and information sharing.

Now, it’s not that there aren’t any applications for doing this. There are plenty of them. Most of them fall under the heading of collaboration platforms and provide tools for building communities, authoring and sharing content, managing projects, and collaborating in truly visionary ways. The problem is that full-scale adoption of this collaborative approach has hardly caught on.

For many, especially the over-35 crowd, using these systems falls on par with the joy of filling out a timesheet–just another cumbersome task that has to be done; another process getting in the way of real work. And the result, no surprise, is that most knowledge workers (as we are now known) take any opportunity to work around these systems and avoid these applications altogether. In the absence of a powerful mandate, these applications languish on the sidelines or receive marginal use at best.

Personally, I’m a huge supporter and user of this new generation of collaborative software. I have been very close to a number of implementations (including our own in-house transformation), and I have experienced firsthand just how powerful they can be. More importantly, I believe that I have discovered the secret to success with this type of change. Are you ready? It’s gonna shock you at first, so stay with me.

The secret to success
OK, here it is: Disable e-mail attachments. That's right, stop allowing people in your company to send an attachment along with their e-mails to anyone inside your company. (You'll have to leave attachments enabled for communicating with outsiders, of course.) If you have the influence (or guts) to pull it off, I promise it will drive adoption of your collaboration application so quickly you won't believe it. Here's why:

At the heart of all true collaboration applications is the basic understanding that we work together on ideas and these ideas are born, take shape, and live in documents. From the earliest stages of idea generation (whether as Word, PowerPoint, Excel, Flowchart, MindMap, whatever), collaboration apps encourage users to get material off their local hard drives and into a platform where they belong to all. In short, collaboration applications represent the critical path to true group thinking and working.

But, and this is a big but, in order for these applications to work, they have to be used regularly and properly. Documents have to reside on the platform. And that’s exactly where the problem lies. Most people are not accustomed to working this way. They can’t be bothered to get content onto a collaboration platform. They believe they have a quick and easy way for collaborating without the overhead–it’s called e-mail. And human nature ain’t on your side when it comes to beating this one.

I could go on and on about all the wonderful benefits available to users and companies that embrace collaboration platforms–commenting, notification, version control, search, and so much more–but the prevalent truth is that wide-scale adoption is still the exception, not the rule. (God knows the vendors are working it day in and day out.) As in many other cases, adoption of collaboration platforms lags because tomorrow's potential benefits don't seem to offer enough to pull users away from today's quick-and-dirty process.

Case study–a law firm takes the plunge
I have seen extremely smart lawyers suffer document-version screwups multiple times at a cost of hundreds of hours of rework (that means tens of thousands of dollars unbilled) and still avoid using the firm’s collaboration platform.

All that changed for one firm when a senior partner, fed up with the situation and associated costs, politely refused to read anything e-mailed to him as an attachment. To boot, he didn't e-mail attachments either. If his colleagues wanted to collaborate and work with him–and since he was the senior partner, they certainly did–there was no choice but to use the collaboration platform. His position: If the document was worth his time, it was worth a two-minute investment for the “sender” to work through the platform.

Sure enough, within 60 days the firm was transformed. Everything, and I mean everything, moved onto the collaboration platform. And then the magic started to happen. Document comments started flying around, stringing one-off thoughts into actual discussions. Version-control worries became a thing of the past. New ideas began popping up in the company wiki, and a simple but effective task management process came to life on its own. Here's the best part: No one, and I mean no one, ever sent another “Could you send me that file?” or “Is this the latest version?” e-mail. All this happened because the firm's central building blocks, its intellectual property, were on the platform and not being passed around via e-mail.

Today, if you asked anyone at the firm about the platform, they would say that they couldn't work without it and that going back to e-mail-centric collaboration would be a painful setback to their productivity. Success! And the best part of it all: Internal e-mail went back to being used for what it was originally intended–brief, quick, one-to-one messages. Anything more substantial goes onto the collaboration platform from the start.

The takeaway
I know it sounds a bit extreme, and you may not be able to pull it off completely in your organization. Nonetheless, you may be able to apply the lesson in a more limited way. Perhaps take baby steps–a day or a week without attachments–as a pilot. One thing is certain: once people cross over to the other side, it doesn't take long at all for them to stop wishing there was a way back.

Marc J. Schiller is a leading IT thinker, speaker, and author of the upcoming book The Eleven Secrets of Highly Influential IT Leaders. Over the last 20 years he has helped IT leaders and their teams dramatically increase their influence in their organization and reap the associated personal and professional rewards.

Use Sysinternals Active Directory Explorer to make a domain snapshot

Active Directory is one of Microsoft’s best products ever in my opinion. It allows for an incredible amount of control of computer and user accounts, and there is so much more under the hood.

The free Sysinternals Active Directory Explorer tool allows administrators to quickly look at information for the entire domain, as well as take a snapshot for comparison at a later date. The tool should not replace any of the Active Directory tools for everyday use, but rather supplement them for snapshots or a view into specific configuration.

Once Active Directory Explorer is installed, the basic authentication screen appears to connect to a database (Figure A).

Figure A


It’s not ideal, but you can create objects, such as a user account, within the Active Directory Explorer tool (Figure B).

Figure B


Creating a snapshot of the Active Directory domain (Figure C) will export the entire directory as a .DAT file on local disk.

Figure C


You can then apply the snapshot as a comparison to the live configuration of the domain; this is a great way to see what has changed. It can also be a much more comfortable way to investigate changes than performing a wholesale restore of the domain, or even of selected objects, which can be very impactful to the state of user and computer accounts. Figure D shows a comparison of the snapshot to a live domain being prepared.

Figure D

Click the image to enlarge.
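
Snapshots can also be scripted, which is handy if you want a scheduled baseline to compare against later. The line below is a sketch based on the -snapshot switch documented for AD Explorer; the empty connection string means connect to the current domain, and the output path is a placeholder:

ADExplorer.exe -snapshot "" C:\Snapshots\domain-baseline.dat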

Rick Vanover (MCITP, MCSA, VCP, vExpert) is an IT Infrastructure Manager for a financial services organization in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

5+ tips to ensure PCI DSS compliance

On occasion, I help a friend who owns several businesses. His latest venture is required to comply with the Payment Card Industry Data Security Standard (PCI DSS). My friend is computer savvy, so between the two of us, I assumed the network was up to snuff. Then we went through a compliance audit.

The audit was eye opening. We embarked on a crash course in PCI DSS compliance with the help of a consultant. My friend thought the consultant could help prepare for the mandatory adoption of PCI DSS 2.0 by January 1, 2011.

The PCI Security Standards Council defines PCI DSS this way: “The goal of the PCI Data Security Standard is to protect cardholder data that is processed, stored, or transmitted by merchants. The security controls and processes required by PCI DSS are vital for protecting cardholder account data, including the PAN–the primary account number printed on the front of a payment card.”

The consultant’s first step was to get familiar with the network. He eventually proclaimed it to be in decent shape, security-wise. Yet the look on his face told us there was more. Sure enough, he went on to explain that more attention must be paid to protecting cardholder data.

Back to school
The consultant pointed out that PCI DSS consists of 12 requirements. These requirements are organized into six guides. Although the requirements are for PCI DSS compliance, I dare say the guides are a good primer for any business network, regardless of whether PCI DSS is a factor. With that in mind, I’ve used the guides as the basis for these tips.

1: Build and maintain a secure network
Guide 1 states the obvious, and books have been written on how to secure a network. Thankfully, our consultant gave us some focus by mentioning that PCI DSS places a great deal of emphasis on the following:

  • Well-maintained firewalls are required, specifically to protect cardholder data.
  • Any and all default security settings must be changed, especially usernames and passwords.

Our consultant then asked whether my friend had offsite workers who connected to the business’s network. I immediately knew where he was going. PCI DSS applies to them as well–something we had not considered but needed to.

2: Protect cardholder data
Cardholder data refers to any information that is available on the payment card. PCI DSS recommends that no data be stored unless absolutely necessary. The slide in Figure A (courtesy of PCI Security Standards Council) provides guidelines for cardholder-data retention.


Figure A

One thing the consultant stressed: After a business transaction has been completed, any data gleaned from the magnetic stripe must be deleted.

PCI DSS also stresses that cardholder data sent over open or public networks needs to be encrypted. The minimum required encryption is SSL/TLS or IPsec. Something else to remember: WEP has been disallowed since July 2010. I mention this because some hardware, like legacy POS scanners, can use only WEP. If that is your situation, move the scanners to a network segment that is not carrying sensitive traffic.

3: Maintain a vulnerability management program
It’s not obvious, but this PCI DSS guide subtly suggests that all computers have antivirus software and a traceable update procedure. The consultant advised making sure the antivirus application has audit logging and that it is turned on.

PCI DSS mandates that all system components and software have the latest vendor patches installed within 30 days of their release. It also requires the company to have a service or software application that will alert the appropriate people when new security vulnerabilities are found.

4: Implement strong access control measures
PCI DSS breaks access control into three distinct criteria: digital access, physical access, and identification of each user:

  • Digital access: Only employees whose work requires it are allowed access to systems containing cardholder data.
  • Physical access: Procedures should be developed to prevent any possibility of unauthorized people obtaining cardholder data.
  • Unique ID: All users must have an identifiable user name. Strong password practices should be used, preferably combined with two-factor authentication.

5: Regularly monitor and test networks
The guide requires logging all events related to cardholder data. This is where unique ID comes into play. The log entry should consist of the following:

  • User ID
  • Type of event, date, and time
  • Computer and identity of the accessed data

The consultant passed along some advice about the second requirement. When it comes to checking the network for vulnerabilities, perform pen tests and scan the network for rogue devices, such as unauthorized Wi-Fi equipment. It is well worth the money to have an independent source do the work. Doing so removes any bias from company personnel.

6: Maintain an information security policy
The auditor stressed that this guide is essential. With a policy in place, all employees know what’s expected of them when it comes to protecting cardholder data. The consultant agreed with the auditor and added the following specifics:

  • Create an incident response plan, since figuring out what to do after the fact is wrong in so many ways.
  • If cardholder data is shared with contractors and other businesses, require third parties to agree to the information security policy.
  • Make sure the policy reflects how to take care of end-of-life equipment, specifically hard drives.

Final thoughts
There is a wealth of information on the PCI Security Standards Council’s Web site. But if you are new to PCI DSS, or the least bit concerned about upgrading to 2.0, I would recommend working with a consultant.

5 tips to help prevent networking configuration oversights

I don’t know about you, but I find myself forgetting the same things over and over, a case of deja vu and amnesia at the same time: “I think I forgot this before!” When it comes to networking configuration, small errors happen most frequently. Here are some of the networking configuration errors I often encounter, along with what I’m doing to reduce the chances of their happening again.

1: Subnets other than 24-bit
How many subnets do you have that are something other than a 24-bit netmask (255.255.255.0)? I don’t work with many subnets other than the standard class C network, but every time I do, I have to double-check myself to make sure the correct subnet mask is applied. I’m trying to find reasons to use subnets other than the venerable 24-bit mask, but the reasoning becomes uncertain in most internal IP address spaces with non-routable IP addresses.

2: DNS suffix lists
Having a complicated list of DNS suffixes and missing one or more of the entries can make name resolution less than pleasant. The good news is that we can fix this via Windows Group Policy, which can set the primary DNS suffix and the suffix search order for each computer account.
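
When you're troubleshooting a single machine, it also helps to see what suffix list the box actually has. Here is a minimal PowerShell sketch; the registry path holds locally configured suffixes (policy-delivered suffixes are stored elsewhere), and the suffix names are hypothetical:

$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters'
(Get-ItemProperty -Path $key -Name SearchList).SearchList                            # view the current search list
Set-ItemProperty -Path $key -Name SearchList -Value 'corp.example.com,example.com'   # comma-separated list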

3: Default gateway other than .1
Each time a static IP address is configured on a network that has a default gateway other than .1, I get a little confused and have to double-check the configuration. For subnets smaller than a class C (fewer than 254 usable hosts), the chances are higher that the subnet's range of last-octet values won't allow a .1 default gateway. The fix can be to standardize on class C subnets for internal networks, even if there are wasted IP addresses at the end of the range.

4: DNS IP addresses
If I had my way, every DNS server at every site would have the same IP address structure as every other site. That way, I would have to determine only the first two or three positions of the IP address and the DNS servers would be easy to figure out. I’m game for anything I can do to standardize. For example, if every network has a .1 default gateway, .2 can be the DNS server for that network. That, I can remember.

5: WINS in all its glory
I can ping the server by fully qualified domain name, but I can't reach it by just the NetBIOS name. A number of things can be wrong, including WINS configuration. A properly configured set of DNS suffixes and search orders can often address this. But one way to avoid the issue altogether is to implement the GlobalNames zone with Windows Server 2008's DNS engine.
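
Setting up the GlobalNames zone on a Windows Server 2008 DNS server is a short job: enable GlobalNames support, create the AD-integrated zone, and add a CNAME for each single-label name you want resolvable everywhere. A sketch with dnscmd, where the server name, host alias, and target FQDN are placeholders:

dnscmd DC1 /config /enableglobalnamessupport 1
dnscmd DC1 /zoneadd GlobalNames /dsprimary
dnscmd DC1 /recordadd GlobalNames fileserver CNAME fs01.corp.example.com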

Easy printer sharing in GNOME

Do you remember how challenging sharing printers could be back when you had to manually configure your smb.conf file to include shared printers? Well, those days are over with the latest incarnations of the GNOME desktop. Like folder sharing, printer sharing has been made very simple and can be done completely within a GUI. Let’s see just how this is done.

Assumptions
I will assume that you already have the printer attached to the local machine and it is printing just fine. I will also assume the machine the printer is attached to is the Linux machine that will share the printer out. If that's the case, you are ready to begin the sharing process.

How to share out a printer
The first thing to do is to click System | Administration | Printing. When this new window opens, right-click the printer you want to share and select Properties. From the Properties window click the Policies tab (see Figure A) and then make sure the following are checked:

  • Enabled
  • Accepting Jobs
  • Shared

Figure A

Once you have those items checked, click OK.

The next step is to configure the CUPS server settings. To do this go back to the main Printing window and click Server | Settings. In this new window (see Figure B) make sure the following items are checked:

  • Publish shared printers connected to this system.
  • Allow printing from the network.

The rest of the settings are optional.

Figure B

Once you click OK, your printer should be ready for use by remote machines. Of course, how you connect to this shared printer will be dictated by the operating system you are connecting from.

Issues
Obviously there may be issues, depending upon the OS you are connecting from. For example, if you are connecting from a Windows 7 machine, you may need to make a single change to your smb.conf file (yes, there will be a manual edit in this case). The edit in question is this:

  1. Search for the [printers] section.
  2. Change the line browseable = no to browseable = yes.
  3. Restart Samba.

That's it. Once you make that change, you should be able to see your shared printers from Windows machines.
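
For reference, the [printers] stanza should end up looking something like the following after the edit (the other lines are typical distribution defaults and may differ on your system):

[printers]
   comment = All Printers
   path = /var/spool/samba
   browseable = yes
   printable = yes
   guest ok = no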

Final thoughts
Sharing out printers used to be a challenge for Linux users. Thanks to modern desktops like GNOME (and a much easier to administer Samba), printer sharing has become far easier than it once was.

Jack Wallen was a key player in the introduction of Linux to the original Techrepublic. Beginning with Red Hat 4.2 and a mighty soap box, Jack had found his escape from Windows. It was around Red Hat 6.0 that Jack landed in the hallowed halls of Techrepublic.


Optimize data access for mobile phone apps

I’ve been experimenting with Windows Phone 7 development, and I have not been 100% happy with the process. (For details, read My first Windows Phone 7 app development project and The Windows Phone 7 App Hub experience.) However, an interesting aspect of my experiment is that the limitations of mobile devices (and Windows Phone 7 specifically) are forcing me to dust off some old-school performance techniques.

The major limitation I encountered with Windows Phone 7 is that it does not support WCF Data Services natively. There is a library to remedy this problem, but unfortunately, it does not support authentication, and many data services will need authentication. You can manually make the request and parse the XML yourself if you really want to, but it is clumsy.

The other issue is that, as the publisher, you incur ongoing costs for publishing data via Web services, and those costs are directly linked to usage rates, but the App Hub publishing model does not allow for subscription fees at this time. If your application is popular, the last thing you need is to be selling an app for 99 cents that costs you 20 cents per user per month to operate.

Another concern with using Web services is that the Windows Phone 7 application guidelines are very strict about delays related to networking; you cannot make these requests synchronously, and you must have provisions for cancellation for “long running” operations. In my experience, an application was rejected because it called a service that never took more than a second or two to return with results, and I needed to provide a Cancel button for that scenario.

Because trying to access Web services is so clumsy right now, and you need to be mindful of the need to support cancellation, an attractive alternative is to put the data locally and work with it there.

Unfortunately, Windows Phone 7 also lacks a local database system. At best, your local data options are XML files and text files.

If you do this kind of work on a desktop PC, it's not a big deal; by default, people will just throw it into an XML file and have a great day. The problem is that XML is a very inefficient file format, particularly on parsing and loading, and mobile devices lack CPU power. Depending on how much data you have, your application could be very unresponsive while loading the data, which will get your application rejected or force you to support cancellation for loading the data. And honestly, what is the point of a data-driven app where the users cancel the data that is being loaded?

So I’ve been digging into my bag of tricks (I’m glad I remember Perl), and here are some ways you can load data with a lot more speed than parsing XML.

  • CSV: CSV is a tried and true data format, and it loads very fast. There are a zillion samples on the Internet of how to work with CSV data.
  • Fixed width records: If you need even more speed than CSV offers, and you are willing to give up some storage space in the process, fixed width records are even faster than CSV. You can find lots of examples online of how to implement a data system using fixed width records.
  • Indexing: You can create a simple indexing system to help locate your records in a jiffy. If your application only needs to read data, this is downright easy. Indexing provides awesome speed boosts with fixed width records since you can read the index location, multiply the row number by the record size to get the byte offset, and move directly there (see the sketch after this list). It can provide an advantage for delimited files too, but usually only if you would otherwise need to parse the records to find the data. Load the index into RAM for additional benefits.
  • Data file partitioning: Sometimes data can logically be split amongst smaller files, which can help your performance with delimited data files. For example, if you have data that can be grouped by country, put each country’s data into a separate file; this way you reduce the number of reads needed to find data, even if you know what line it is on. Fixed width records with an index usually will not benefit from data partitioning, since they can directly access the data.
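
The byte-offset math behind fixed-width records and indexing is easier to see in a short script than in prose. This sketch uses PowerShell only to illustrate the idea (on the phone you would do the same thing in C# against an isolated storage stream); the file name and the 64-byte record size are assumptions:

$recordSize = 64                                   # assumed fixed record length in bytes
$row = 1500                                        # record number looked up in the in-memory index
$fs = [System.IO.File]::OpenRead("C:\data\records.dat")
[void]$fs.Seek($recordSize * $row, [System.IO.SeekOrigin]::Begin)
$buffer = New-Object byte[] $recordSize
[void]$fs.Read($buffer, 0, $recordSize)
$fs.Close()
[System.Text.Encoding]::ASCII.GetString($buffer)   # the complete record, no parsing required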

Disclosure of Justin’s industry affiliations: Justin James has a contract with Spiceworks to write product buying guides; he has a contract with OpenAmplify, which is owned by Hapax, to write a series of blogs, tutorials, and articles; and he has a contract with OutSystems to write articles, sample code, etc.

Justin James is an employee of Levit & James, Inc. in a multidisciplinary role that combines programming, network management, and systems administration. He has been blogging at TechRepublic since 2005.

How to use Microsoft Excel’s RANK() function

Microsoft Excel


How to use Microsoft Excel’s RANK() function

Excel’s RANK() function returns the rank of a value within the context of a list of values. By rank, I mean a value’s relative position to the other values in the list. You could just sort the list, but that’s not always practical and doing so won’t return a rank, although you can easily see which values rank highest and lowest in a sorted list.

The figure below shows the RANK() function at work in a simple spreadsheet. The function in cells F2:F5 returns the rank of the four values in E2:E5. Those values are the result of the following SUMIF() function:

=SUMIF($A$2:$A$9,$D2,$B$2:$B$9)

The SUMIF() returns a total for each individual listed in column A. (You can recreate this spreadsheet or work with a simple column of values.)

About RANK()
The RANK() function has three arguments:

RANK(number,reference,[order])

where number is the value you’re ranking, reference identifies the list of values you’re comparing number against, and order specifies an ascending or descending rank. If you omit order, Excel assumes the value 0, which ranks values in descending order. Any value other than 0 ranks in ascending order. In this example, I enter the following function into cell F2:

=RANK(E2,$E$2:$E$5)

Notice that number is relative but reference is absolute. You’ll want to maintain that structure when applying this to your own spreadsheet. Copy the function in F2 to F3:F5. The largest value, 120, returns a rank of 1. The lowest value, 98, is 4. To reverse the ranking order, include order as follows:

=RANK(E2,$E$2:$E$5,1)

Understanding a tie
Something you’ll want to watch for is a tie. RANK() will return the same rank for a value that occurs more than once. Interestingly, RANK() accommodates the tie by skipping a rank value. For instance, the following spreadsheet shows what happens when both Alexis and Kate have the same value (101). The rank for both is 2 and there’s no rank of 3. The lowest value still ranks as 4.

There’s no argument to change this behavior. If a tie isn’t valid, you must find a second set of criteria to include in the comparison.
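
If you do need unique ranks, one common workaround is to break ties by row position with COUNTIF. Using this example's range, the formula in F2 (copied down through F5) would look something like this; tied values then receive consecutive ranks in row order:

=RANK(E2,$E$2:$E$5)+COUNTIF($E$2:E2,E2)-1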

Microsoft Outlook


Display multiple monthly calendars in the Date Navigator

By default, Outlook displays just 1 month in the Date Navigator. By stealing a bit of space from other areas, you can display more. If you want to keep the Date Navigator in the Navigation Pane, just drag the right and bottom borders to allow more room. Make My Calendars and Mail (also in the Navigation Pane) as small as possible to free up the most room.

If the Date Navigator is in the To-Do Bar (new to Outlook 2007), do the following:

  1. Right-click the bar and select Options.
  2. Change the Number of Months Row from 1 (the default) to 3 or 4 (up to 9).
  3. Click OK, and Outlook will display that many rows of calendars in the Date Navigator.

To see even more calendars in the Date Navigator, drag the border between the To-Do Bar (now mostly Date Navigator) and the calendar view to the left. Doing so will fill the Date Navigator with more monthly calendars, automatically. Changing the row option isn’t always enough–you might have to change your screen resolution to see them all.

Microsoft Word


Tips for wrapping text around a Word table

Most of us tend to layer a table between paragraphs of text–I know I usually do. The figure below shows the typical placement of a simple table in a document. The table follows a paragraph of explanatory or introductory text.

You might not realize that you can position a table in a paragraph and wrap text around the table. This next figure shows the result of dragging the table into the paragraph. By default, the table’s Text Wrapping property is None and the table aligns to the left margin of the page. When I dropped it into the paragraph, Word changed the property so Word could wrap the text around the table. Word does the best it can, but the results aren’t always a perfect fit. Fortunately, you’re not stuck.

The first thing you can do is move the table around a bit more–especially if the placement doesn’t have to be exact. By moving the table around just a little, you’ll probably hit upon a better balance. (Most likely, I wouldn’t break up the middle of a paragraph with a table, but for the sake of the example, please play along.)

Word does a good job of defining properties when you drag the table to position it. However, if a little drag action doesn’t produce a mix you can live with, you can force settings that are more exact. To access these properties, right-click the table, choose Table Properties, and click the Table tab (if necessary). First, make sure the Text Wrapping property is set to Around. If you want the table flush to the left or right, change the Alignment to Left or Right. The example table is centered.

Click the Positioning button. In the resulting Table Positioning dialog box, you can set the following properties:

  • The horizontal position of the table, relative to a column, margin, or page.
  • The vertical position of the table, relative to a paragraph, margin, or page.
  • The distance of the table from the surrounding (wrapped) text.
  • Whether the table should move with the text.
  • Whether the text can overlap the table.

The best way to learn about these properties is to just experiment. For instance, setting the Right distance from surrounding text to 3 removes the text to the right of the table–remember when I said I probably wouldn't want a table to break up text? Well, this is one way to get the table inside the paragraph without breaking up the text. I just reset one property!

As you experiment, you’ll probably find, as I have, that dragging a table around produces a pretty good balance. It’s good to know though, that you can force things along a bit by setting the positioning properties.

Remove virtual machine swap space on disk

The use case is rare, but sometimes it may be necessary for VMware vSphere virtual machines not to use a virtual swap file.

Each virtual machine in vSphere is subject to a number of memory management technologies, which include the balloon driver delivered through VMware Tools, transparent page sharing on the host, memory compression, and hypervisor swapping. (The technologies are listed in order from most desirable to least desirable.) The hypervisor swapping function backs the virtual machine's memory with disk instead of with addressable space in the host's RAM.

A virtual machine creates a swap file on disk; this is separate from anything that may be configured in the operating system, such as a Windows page file on the guest virtual machine. This swap file (Figure A) is equal in size to the memory allocation of the virtual machine.

Figure A


This particular virtual machine only has 4 MB of memory (it is a low-performance test system), but the 4 MB of RAM is also represented on the VMFS datastore (LUN-RICKATRON-1) as a .vswp file. While the 4 MB for this virtual machine is not too impactful on most storage systems, larger virtual machine memory provisions can chew up datastore space and (hopefully) never be used.

If you don't want to have this .vswp file on the storage at all, there is one way to prevent the virtual machine from representing the physical memory allocation on disk. Set a memory resource reservation for the entire amount of memory assigned to the virtual machine; the VM will then not power on unless the host can exclusively provide the reserved amount of memory, and in that situation the guest would never resort to hypervisor swapping (Figure B).

Figure B


Once the reservation is made, the next time the virtual machine is powered on, it will not claim space for the .vswp file on the datastore.
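
The reservation can also be set from PowerCLI instead of the vSphere Client. This is a sketch only; the virtual machine name is a placeholder, and the reservation (in MB) must match the VM's full memory allocation for the swap file to be avoided:

Get-VM -Name TIER1-APP01 | Get-VMResourceConfiguration | Set-VMResourceConfiguration -MemReservationMB 4096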

Note: This configuration should be used only in very specific situations, such as a tier 1 application for which you will forgo the benefits of VMware memory management to ensure the absolute highest performance.

Rick Vanover (MCITP, MCSA, VCP, vExpert) is an IT Infrastructure Manager for a financial services organization in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

How to market your internal IT department

Marketing their organization is something most CIOs spend little time considering. While no one likes a shameless promoter, many of the most successful IT organizations I have worked for actively market themselves around the corporation, even if they may not use the term “marketing” to describe their activities.

Without some element of marketing, IT will often be neither seen nor heard, unless summoned, save for the rumor-mill rehashing of its most recent stumble or failure. With some simple marketing efforts, the company as a whole can be reminded of the services IT can offer, informed of recent successes, and be seen as a home to thought leadership on technology.

Here are a few simple ways to market your internal IT organization with little to no marketing budget and a minimal investment of time.

Change your attitude
The most effective leaders in any organization are those who can sell their vision. While it may seem crass to call every great leader an effective salesperson, it is largely true. Effective leaders can pitch their point, expound on the benefits that are most likely to appeal to the current listener, and then “close the deal” with the support of much of the organization.

This “sales” attitude permeates everything from management presentations to structuring organizational efforts that appeal directly to potential “customers”. IT, especially, is a group that peddles ideas, and treating every interaction with other business units as a chance to pitch your most compelling ideas can do wonders for how you structure a proposal and present its benefits.

While something like enterprise software might affect the whole organization, a change in attitude will cause you to present the package differently to, say, the operations team rather than to the finance department. This will cause you to have laser-like focus on appealing to the listener’s interests, rather than self-centered technical discussions or questionable and unconvincing “benefits”.

Drop the jargon
The most effective marketing reaches us in a language we can easily understand. The same product description will use different language and imagery when targeted at one group versus another, but in each case will appeal to those groups in their own terms.

While IT professionals like us may get excited by talk of virtualized cloud services and ITIL frameworks, the people impacted by these technologies usually care less about the fancy verbal footwork and simply want to know how their working lives will be improved by what we are peddling. When we can separate the benefits from the technologies that deliver them and effectively articulate those benefits, then IT will be best presented and most easily accepted and embraced.

Become a thought leader
Technology, especially in the consumer space, is changing at a record pace. Most of us have been cornered and asked for an opinion on some new gadget or technology making the press’s rounds.

Rather than waiting for these ad hoc “hallway moments”, publish an informal newsletter that talks about some of IT’s recent successes that address current technology trends. There’s no shame in having a young staffer who is passionate about the latest mobile technology pen a couple of paragraphs about how Android could affect the company or how some apps could help the iPad become a productivity tool. If CIOs are not presenting this information, executives may be looking to teenage children or staffers outside IT, making corporate IT look like a dated dinosaur rather than a trend-spotter.

An IT newsletter need not be an overwrought, 10-page affair with marvelous graphics. It can start as a simple four or five paragraphs e-mailed to a handful of colleagues. The best newsletters are informal and informational, addressing the concerns of their readers. Ask a trusted colleague or two what technologies they are following and interested in learning more about. Combine this with short and subtle promotional features about IT's recent successes, and you have a winning formula that presents IT as competent and knowledgeable. Old-fashioned e-mail is usually a better tool than a blog buried on an internal Web site that few will read, and if you are comfortable with it, self-effacing humor and an informal style will gain more readers than a staid yawner that reads like a thesis.

While marketing is probably one of the last things you thought you would need to worry about as an IT executive, any organization, whether it is a Fortune 100 company or an IT department of five people at a small company, can benefit from being presented in the best possible light. Dedicating four or five hours each month to these activities can build trust in the IT department, improve its image, and even make the next budget-approval process far less painful.

Patrick Gray is the founder and president of Prevoyance Group, and author of Breakthrough IT: Supercharging Organizational Value through Technology. Prevoyance Group provides strategic IT consulting services to Fortune 500 and 1000 companies.

Disable Windows Update for device driver installation

When new hardware is installed on a Windows server, there are a number of options to consider, such as which driver to use and whether to let Windows Update locate the driver. There are settings for specifying this behavior.

For standalone systems, the Device Installation Settings option in System properties can dictate behavior for device installation. Figure A shows this option for standalone systems.

Figure A

If this option needs to be managed centrally for a number of computer accounts, Group Policy can do the job. Device installation behavior is managed in Group Policy under Computer Configuration | Policies | Administrative Templates | System | Device Installation. From there, the Specify Search Order For Device Driver Source Locations value and a number of other behavior values can be configured to dictate how drivers are installed on servers. To disable Windows Update as a driver source, enable this value, as shown in Figure B; a scripted equivalent is sketched after the figure.

Figure B
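
For administrators who prefer to script or verify the change, this policy is commonly reported to map to the DriverSearching key under HKLM\SOFTWARE\Policies\Microsoft\Windows. The following is a minimal sketch from an elevated command prompt; the SearchOrderConfig value name and its data are assumptions to verify against your Group Policy reference before rolling anything out.

  REM Assumed mapping for "Specify Search Order For Device Driver Source Locations"
  REM A data value of 0 is generally understood to mean "do not search Windows Update" (verify first)
  reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\DriverSearching" /v SearchOrderConfig /t REG_DWORD /d 0 /f

  REM Confirm the value was written
  reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\DriverSearching" /v SearchOrderConfig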

It can be very important to set this type of configuration for client systems as well as server systems. Storage systems, for example, can be very particular about which driver versions are supported for devices such as Fibre Channel host bus adapters (HBAs) that work with a storage processor driver managing multipathing.

Other devices, such as tape drives, may have specific driver requirements for interaction with HBAs or with SCSI or SAS controller interfaces. On the client side, printers are often the primary target for driver revision control. This same area of Group Policy can be used to specify additional options for device installation behavior.

Rick Vanover (MCITP, MCSA, VCP, vExpert) is an IT Infrastructure Manager for a financial services organization in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

Mobile banking apps may be vulnerable

Banking apps for mobile devices are increasing in popularity. Estimates by financial services firm TowerGroup suggest there will be 53 million people using mobile banking apps by 2013.

My bank recently rolled out its own iPhone app. I downloaded it and was just about to check it out. Then, paranoia. If you read my article about whether online banking is safe or not, you will understand. What do I know about this app?

So, I started looking into mobile banking apps. It did not take long to find that security advocates have their concerns as well. Spencer Ante of the Wall Street Journal raises a warning in his article “Banks Rush to Fix Security Flaws in Wireless Apps”. Here is the lead paragraph:

“A number of top financial companies and banks such as Wells Fargo & Co., Bank of America Corp., and USAA are rushing out updates to fix security flaws in wireless banking applications that could allow a computer criminal to obtain sensitive data like usernames, passwords, and financial information.”

The same article mentioned viaForensics, a company specializing in securing mobile applications, as the firm responsible for discovering the vulnerabilities. Good for them. My question is, why is this even happening? It is not complicated. Our banking credentials should be considered sacred, period.

On a good note, viaForensics’ Web site mentions that their researchers are working with the affected financial institutions: “Since Monday (11/01/2010), we have been communicating and coordinating with the financial institutions to eliminate the flaws.”

The blog post goes on to say: “Since that time, several of the institutions have released new versions and we will post updated findings shortly.”

In the quote, viaForensics mentioned publishing new test results. That refers to their online service called appWatchdog.

Within days, and to their credit, most of the banking firms pushed out updates to remove the vulnerabilities. appWatchdog results from November 3, 2010 showed Wells Fargo’s app for Android phones failing several of the checks; three days later, the same Android app from Wells Fargo passed every test.

Why worry then?
It appears mobile banking applications are getting fixed. It was also pointed out that viaForensics found vulnerabilities, not actual attacks. So there is nothing to worry about, right? Not quite. I talked to experts who disagree.

One researcher in particular voiced the following concerns:

  • Most mobile devices are so new that security apps are not yet available for them.
  • Keeping members’ banking information secure should be a no-brainer, yet it is not.
  • PCs are still a target-rich environment, so criminals are not yet focused on creating mobile phone malware.

The researcher’s first two concerns rang true. The third intrigued me, so I decided to learn more. I came across this article quoting Sean Sullivan of F-Secure: so far in 2010, F-Secure had detected 67 strains of smartphone malware, compared with thousands aimed at PCs.

That number looks insignificant by comparison, but Sullivan also mentioned that this year’s total was nearly double last year’s. So, stay tuned.

What’s the answer?
For right now, if banking online is a must, a dedicated PC, a LiveCD, or a bootable flash drive is still the best solution.

Final thoughts
I’m not sure what it all means. Is it FUD, or are we repeating with mobile apps the same mistakes we make banking online with PCs? What do you think?

Update (Nov. 29, 2010):
Andrew Hoog, chief investigative officer for viaForensics, contacted me. They tested five new mobile applications: Groupon, Kik Messenger, Facebook, Dropbox, and Mint.com. All of the applications failed to store usernames and application data securely. More troubling, four of them (Groupon on Android, Kik Messenger on Android and iPhone, and Mint.com on Android) were storing passwords in plain text.

Michael Kassner has been involved with IT for over 30 years and is currently a systems administrator for an international corporation and security consultant with MKassner Net.

Most important updates in Red Hat Enterprise Linux 6

On Nov. 10, Red Hat unveiled the latest version of Red Hat Enterprise Linux (RHEL): version 6. Version 5 was released in March 2007, so it has been a long road to produce the latest version.

Due to the length of time between releases, RHEL6 is quite unlike RHEL5. Obviously it comes with newer versions of software across the board, something welcomed by those who find RHEL5 a little long in the tooth. Keeping in mind that “bleeding edge” doesn’t necessarily belong on an enterprise platform, it is nice to have more recent software along with the inevitable feature enhancements.

Cloud computing
One of the big focuses of RHEL6 is cloud computing. This involves a number of factors, and a lot of work has gone into making it not only viable, but highly competitive with other offerings.

Performance enhancements abound, making it very efficient and scalable not only for current hardware, but also for hardware yet to come. For example, systems with 64TB of physical memory and 4096 cores/threads are not typically in use today, but RHEL6 will support them, out of the box, when they are.

While performance is definitely one area of cloud computing, another is virtualization, and this is where KVM becomes a direct competitor to other virtualization solutions from vendors such as VMware. Using KVM and libvirt, RHEL6 provides a great virtualization management infrastructure with a really powerful virtualization solution, all baked right into the operating system (OS) at no extra cost.
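
As a rough sketch of what that management layer looks like day to day, the libvirt tools can be driven entirely from the command line. The guest name below is a made-up example, not something RHEL6 ships with.

  # List all defined guests, running or not
  virsh list --all

  # Start a guest and attach to its text console (demo-guest is a placeholder name)
  virsh start demo-guest
  virsh console demo-guest

  # Show basic information about the host the hypervisor is running on
  virsh nodeinfo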

Security
That said, the thing I am most passionate about is security. Perhaps it’s an odd thing to be so interested in, but it’s both a hobby and a profession for me, so the security features in RHEL6 matter a great deal to me. They should also matter to anyone running a public or private cloud, because heavy virtualization and cloud computing make proactive security ever more important.

While RHEL has provided SELinux for a long time, RHEL6 provides further SELinux support and policies, making it easier to use now than in previous versions of RHEL. But SELinux is just one piece of the puzzle, and it’s a complex one at that. While great strides have been made to make it easier, many people still opt to turn it off rather than figure out how to make it do what they want. So this is where other security enhancements come into play.
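
Before reaching for the off switch, it is worth seeing how far the stock tools go. A minimal sketch, assuming the targeted policy and an Apache web server as the example workload:

  # Check the current SELinux mode (Enforcing, Permissive, or Disabled)
  getenforce

  # Review the httpd-related booleans instead of disabling SELinux wholesale
  getsebool -a | grep httpd

  # Persistently allow Apache to make outbound network connections
  setsebool -P httpd_can_network_connect on

  # Review recent denials when something is being blocked
  grep denied /var/log/audit/audit.log | tail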

While RPM packages have always been signed, RHEL6 now uses the SHA-256 algorithm and a 4096-bit RSA signing key to sign packages. This provides users with greater confidence that packages are legitimate and authentic, compared to the weaker MD5 and SHA-1 algorithms that were used in previous versions.
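
Verifying that a package actually carries a valid signature takes only the stock rpm tooling; the package name below is a placeholder.

  # List the GPG public keys currently imported into the RPM database
  rpm -q gpg-pubkey --qf '%{name}-%{version}-%{release} %{summary}\n'

  # Check the signature and digests on a downloaded package
  rpm -K some-package.rpm

  # Verify the files of an installed package against the RPM database
  rpm -V some-package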

Other security features, either carried over from previous versions and improved upon or new to RHEL6, include various proactive protection mechanisms for binaries. These include GCC’s FORTIFY_SOURCE extensions, this time with coverage for programs written in C++, as well as glibc pointer encryption, SELinux Executable Memory Protection, all programs compiled with SSP (Stack Smashing Protection), ELF binary data hardening, support for Position Independent Executables (PIE), and glibc heap/memory checks by default.
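
You can spot-check whether a given binary was built with these protections using nothing more than readelf; sshd is used here purely as an example, and the compiler flags shown are a generic illustration of this kind of hardening, not necessarily the exact flags Red Hat builds with.

  # A Position Independent Executable reports type DYN rather than EXEC
  readelf -h /usr/sbin/sshd | grep 'Type:'

  # Stack Smashing Protection pulls in the __stack_chk_* symbols
  readelf -s /usr/sbin/sshd | grep __stack_chk

  # Illustrative hardening flags for your own builds (assumed, not RHEL's build recipe)
  gcc -O2 -D_FORTIFY_SOURCE=2 -fstack-protector -fPIE -pie -Wl,-z,relro,-z,now -o myapp myapp.c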

In the kernel are protections like NX (No-Execute) support by default, restricted access to kernel memory, and Address Space Layout Randomization (ASLR). The kernel also features support for preventing module loading, GCC stack protection, and write-protected kernel read-only data structures.
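
Most of these kernel protections can be inspected, and in some cases tightened further, through sysctl. A minimal sketch, run as root; note that modules_disabled is a one-way switch until the next reboot.

  # Confirm NX/XD support was detected at boot
  dmesg | grep -i 'NX (Execute Disable)'

  # ASLR: a value of 2 means full randomization, which is the default
  sysctl kernel.randomize_va_space

  # Block any further kernel module loading until the next reboot
  sysctl -w kernel.modules_disabled=1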

With all these features, it is clear that proactive security has been taken seriously in RHEL6 and that a lot of work has gone into making RHEL a secure OS suitable for whatever environment you throw at it: virtual, physical, or cloud. Add to that the new application features that come with newer software versions, the thousands of bugs fixed, and the standard 7-year support lifecycle (with an optional extension to 10 years), and RHEL6 is highly suited to enterprise deployment.

Yes, I am biased towards Red Hat as I am a company employee, but I’m also confident in what RHEL6 brings to the table and willingly stand behind it.

Vincent Danen works on the Red Hat Security Response Team and lives in Canada. He has been writing about and developing on Linux for over 10 years.
