You can still access your data without having to organize and keep track of inconsistent drive letters.
The immense growth of data stores and the costs of storing and securing that data are a major concern for IT departments. Data is often stored in an arbitrary fashion based on practices dating back to the 1980s, and accessed with mapped drives—which are just as antique.
Imagine a company with locations in New York and San Francisco that each used to operate as independent companies. Each has its own IT infrastructure. Some of the most visible differences are the use of different drive-mapping techniques and in how data is organized and stored.
This company has decided both locations need to start operating as a single entity, but the different drive mappings will make sharing data among the locations quite a challenge. To make the situation more difficult, the locations also use different drive mappings for applications. Managers of newly created or merged departments are having trouble finding the information they need. How can these locations work together efficiently?
The exponential growth of data and its unstructured storage is the basic problem here, and this isn’t just an IT problem. The longer people regard this as an IT problem, the longer it will take to resolve. The only solution is to consider it as an overall business challenge.
Once business management sees the benefits of restructuring and cleaning up data storage, the implications can be enormous in terms of efficiency. For the IT department, this type of project can be a major chance to present itself as a real internal business partner instead of merely being “the guys who show up and fix things.”
The goal is to have the business working in a structured manner where each department has one appointed “owner” responsible for the way data is stored, shared and accessed. This departmental owner, along with his colleagues, will design a new structure for data management.
A large percentage of stored data is outdated and almost never accessed anymore. Recent studies indicate that only 7 percent of stored data is actually relevant to the user. First, move this outdated data to an archival folder. Then, on a regular basis (say, twice a year), move the contents of that folder to an actual data archive on less-expensive storage media, or delete what's no longer needed.
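That periodic sweep is easy to script. Here's a minimal sketch in Windows PowerShell; the function name, example paths and the six-month retention period are all assumptions to illustrate the idea, not part of any standard:

```powershell
function Move-StaleFiles {
    param(
        [string]$ArchivalFolder,  # e.g. \\fs\departments\accounting\archive
        [string]$ArchiveShare,    # e.g. \\archiveserver\accounting
        [int]$Months = 6
    )
    # Anything not written to in the last $Months months moves to the archive share
    $cutoff = (Get-Date).AddMonths(-$Months)
    Get-ChildItem -Path $ArchivalFolder -Recurse -File |
        Where-Object { $_.LastWriteTime -lt $cutoff } |
        Move-Item -Destination $ArchiveShare
}
```

Run as a scheduled task twice a year, this keeps the expensive primary storage from silently filling up with files nobody touches.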
The departmental content owner must be responsible for deleting unnecessary files. This is a challenge because most people are reluctant to delete anything. The most important part of the project is restructuring the remaining data so everyone knows where to find the information they need.
Every department should have the following folders: Management, Common, Public and Archive. The Management folder hosts information for departmental managers and access is restricted to those persons. The Common folder should be accessible for all people within the department, and the Public folder by everyone within the organization. Use the Archive folder as a dumpster for infrequently used information and occasionally move this folder’s contents to a true archive based on cheaper storage.
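Because every department gets the same four folders, creating them is a natural candidate for a small script. This is a sketch only; the function name and example paths are my own, not from any standard tooling:

```powershell
function New-DepartmentFolders {
    param(
        [string]$BasePath,   # e.g. \\fs\departments
        [string]$Department  # e.g. Accounting
    )
    # Every department gets the same four standard folders
    foreach ($name in "Management", "Common", "Public", "Archive") {
        $path = Join-Path (Join-Path $BasePath $Department) $name
        New-Item -ItemType Directory -Path $path -Force | Out-Null
    }
}
```

Running it once per department guarantees a consistent layout, which is the whole point of the exercise.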
The departmental content owner is also responsible for maintaining permission levels for the department folders. A good way to set this up is permissions based on roles (job descriptions) within the organization.
First, the departmental content owner and management have to document the schema, including the new folder and permission structure. Then IT can take that overview and create security groups based on the defined roles, add the appropriate persons to those groups and set up the permissions per group as stated in the documentation provided by the departmental owner. The owner, and the owner alone, must communicate any changes within the department to IT.
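The IT side of that handoff might look like the following configuration sketch. It assumes the ActiveDirectory module (Windows Server 2008 R2 or later); the group, user and path names are examples only:

```powershell
# Configuration sketch; group, user and path names are examples.
Import-Module ActiveDirectory

# One security group per role and folder, as defined by the departmental owner
New-ADGroup -Name "ACL-Accounting-Management-Modify" -GroupScope DomainLocal -GroupCategory Security
Add-ADGroupMember -Identity "ACL-Accounting-Management-Modify" -Members "jdoe", "msmith"

# Grant the group Modify rights on the matching folder
$folder = "\\fs\departments\accounting\management"
$acl = Get-Acl $folder
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule(
    "CONTOSO\ACL-Accounting-Management-Modify", "Modify",
    "ContainerInherit,ObjectInherit", "None", "Allow")
$acl.AddAccessRule($rule)
Set-Acl -Path $folder -AclObject $acl
```

Granting rights to groups rather than individual users means a role change is a group-membership change, never an ACL change.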
Although this whole process is not that lengthy to explain, it will most likely be a lengthy one to implement. It’s essential to get full buy-in from all parties. Once the structure is in place it will have many advantages, from a logically defined structure in storage and permissions to a reduction of the amount of data stored on expensive storage.
When letter-mapped network drives first came into prominence, they were easy to implement and uncomplicated to maintain. Since then, the amount of data has exploded and company structures have changed to require easy sharing of data between locations. Technology advances almost at light speed, and it's an everyday challenge to keep up to date with everything new, yet we still cling to a more-than-30-year-old technique for accessing data.
Many locations have different drive letters for different types of data, which makes it almost impossible to share information in a logical and efficient manner. One of the side effects is users copying data to the other locations, thus increasing the already enormous amount of data with duplicated files.
When you consider an organization such as the one I described, with two locations in New York and San Francisco, it will be painfully clear that sharing information between those two locations will be a major challenge. The file storage structure of both locations differs significantly because they were used to operating as more or less independent entities. They’ll have to take some stringent actions to enable efficient file sharing.
The first task is to create a Distributed File System (DFS) folder structure. DFS acts like a blanket over the file structure and is completely transparent for the user. It’s easy to change the server where the files are stored without interrupting data access. DFS also has an intelligent method of having multiple replicated folders in more than one location. A DFS can efficiently replicate changes across those folders. When there are changes to a file, it will only transmit the changes instead of the entire file, saving bandwidth and time.
The next step is to determine the required folder structure. The root folders (or namespaces) in our example will be Applications, Archive and Departments. Of course, in live scenarios, the folder list will be more extensive. For one reason or another, the DFS management tool uses the terms folders and namespaces for the same thing: in the left pane, it calls them folders; in the right pane, namespaces. Underneath each namespace or folder, you create other folders and folder targets. Folder targets point to the actual physical location on file servers.
At this level, restrict folder creation to the file servers themselves. This means IT has to create all first-level folders in DFS and link them to the appropriate folders on the file servers, which makes it impossible for users to "pollute" the structure with all kinds of folders.
For example, IT creates the first-level folders Accounting, Archive and Business Control, and no user can create extra folders. Whenever you need a new first-level folder, the owner must include this in his overall schema of the departmental folders and communicate this to IT.
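As a configuration sketch, creating the namespace and a first-level folder can be scripted with the DFSN cmdlets. These require Windows Server 2012 or later (on older servers, dfsutil.exe performs the same tasks), and the server and share paths here are examples:

```powershell
# Configuration sketch; server names and paths are examples.
# Create the domain-based namespace and one first-level departmental folder.
New-DfsnRoot -Path "\\contoso.com\departments" -TargetPath "\\fs1\departments" -Type DomainV2
New-DfsnFolder -Path "\\contoso.com\departments\Accounting" -TargetPath "\\fs1\shares\accounting"

# A second folder target in the other location, kept in sync by DFS Replication
New-DfsnFolderTarget -Path "\\contoso.com\departments\Accounting" -TargetPath "\\fs2\shares\accounting"
```

Users see one path, \\contoso.com\departments\Accounting, regardless of which physical server actually holds the files.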
Once the DFS structure is set, make it available to the users in an easy-to-use manner. Now I'll do something completely different. In Windows Vista and Windows 7, you can create network locations listed underneath the physical drives in Windows Explorer. To create the network locations for everyone, use a Windows PowerShell script (found on a TechNet forum) that runs as a logon script: addnetworkloc.ps1 (see Figure 1).
Figure 1 This script will help create network locations that all users can access.
param($name, $targetPath)

# Get the base path for network locations
$shellApplication = New-Object -ComObject Shell.Application
$nethoodPath = $shellApplication.Namespace(0x13).Self.Path

# Only create if the local path doesn't already exist and the remote path exists
if ((Test-Path $nethoodPath) -and !(Test-Path "$nethoodPath\$name") -and (Test-Path $targetPath))
{
# Create the folder
$newLinkFolder = New-Item -Name $name -Path $nethoodPath -Type directory

# Create the ini file that marks the folder as a network location
# ({0AFACED1-E828-11D1-9187-B532F1E9575D} is the standard folder-shortcut CLSID)
$desktopIniContent = @"
[.ShellClassInfo]
CLSID2={0AFACED1-E828-11D1-9187-B532F1E9575D}
Flags=2
"@
$desktopIniContent | Out-File -FilePath "$nethoodPath\$name\Desktop.ini"

# Create the shortcut file
$shortcut = (New-Object -ComObject WScript.Shell).CreateShortcut("$nethoodPath\$name\target.lnk")
$shortcut.TargetPath = $targetPath
$shortcut.IconLocation = "%SystemRoot%\system32\SHELL32.DLL, 85"
$shortcut.Description = $targetPath
$shortcut.WorkingDirectory = $targetPath
$shortcut.Save()

# Set attributes on the files and folders
Set-ItemProperty "$nethoodPath\$name\Desktop.ini" -Name Attributes -Value ([IO.FileAttributes]::System -bxor [IO.FileAttributes]::Hidden)
Set-ItemProperty "$nethoodPath\$name" -Name Attributes -Value ([IO.FileAttributes]::ReadOnly)
}
This script runs with two arguments: the first is the desired name of the network location, and the second is the physical location in UNC format:

addnetworkloc.ps1 Departments \\contoso.com\departments
To run this as a logon script, you need to create two Group Policy Objects (GPOs) in Active Directory. The first one lets you use Windows PowerShell for logon scripts. By default, Windows PowerShell scripts are limited because security settings built into Windows PowerShell include something called the “execution policy.” The execution policy determines how Windows PowerShell scripts run.
The default execution policy is set to “Restricted.” This means that scripts—including those you write yourself—will not run. You can fix this manually or by GPO. In the Group Policy Management Console, create a new GPO and call it something like “GlobalC-Enable PowerShell.”
In this GPO, go to Computer Configuration | Policies | Administrative Templates | Windows Components | Windows PowerShell. Set Turn on Script execution to enabled. (GlobalC is a naming convention where “Global” states use of the GPO for more than one organizational unit and “C” means the GPO contains computer configurations.)
The second GPO is “GlobalU-Set Network Locations.” Use this for the Windows PowerShell logon script, configured at User Configuration | Policies | Windows Settings | Scripts | Logon. Once this GPO is active, all users within the scope of the GPO will get the new network locations added. During the migration period, these can exist alongside the old drive mappings.
Working with network locations instead of mapped drives is perfect for accessing data, but it can cause some issues with applications that need to access network resources. Most modern applications support either network locations or UNC paths; in that case, you only need to adjust shortcuts and configuration files. Still, there will always be applications that have trouble working without drive letters.
So how would you resolve this issue? There are several possible methods. The easiest—and the ugliest—is creating a CMD file that maps the drive and runs the application. When the application ends, it deletes the mapped drive. The ugly part of this is that as long as the program runs, you’ll see a black CMD box.
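A minimal version of such a CMD wrapper, with an example drive letter, share and program path, might look like this:

```bat
@echo off
rem Map the drive, run the application, then remove the mapping again.
rem Drive letter, share and program path are examples.
net use Q: \\fs\applications\apps
"Q:\program\program.exe" %*
net use Q: /delete /y
```

The `%*` passes any arguments through to the application; the mapping disappears as soon as the program exits.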
Another more visually appealing solution is to create an AutoIt script that you can compile into an executable. AutoIt is a freeware scripting tool that can create nice solutions. Figure 2 shows an example of a script that maps a drive, runs an application and, when the application is closed, deletes the drive mapping.
Figure 2 A script created with AutoIt that maps a drive, runs an application and deletes the drive mapping.
; Args: driveletter, share, program, [program args ...]
If $CmdLine[0] > 2 Then
  $args = ""
  For $i = 4 To $CmdLine[0]
    $args &= " " & $CmdLine[$i]
  Next
  DriveMapAdd($CmdLine[1], $CmdLine[2], 0)
  ShellExecuteWait($CmdLine[3], $args)
  DriveMapDel($CmdLine[1])
EndIf
Save this script and compile it to something like runmapped.exe. Run the old application that needs a mapped drive like this (note that all arguments are between quotes):
Runmapped.exe "q:" "\\fs\applications\apps" "c:\program files\program\program.exe" "possible argument 1" "possible argument 2" ..
Getting rid of drive mapping isn’t that difficult, from a technical point of view. The biggest challenge will be raising awareness within your company that you need to restructure your data storage.
Using network locations instead of drive letters is just a small tool to help your colleagues access their data in a more logical manner. If you achieve this, your IT department will get a jump-start in truly being a partner to the business. Soon, you, too, will be able to say, “Who needs mapped drives anyway?”