Sunday, March 13, 2011

Windows Phone 7 - Hello World!

Ever since Nokia announced its partnership with Microsoft and made Windows Phone 7 its primary smartphone operating system, I have been eager to give it a spin. So in this post, I am going to start by setting up a development environment for Windows Phone 7, followed by a simple Hello World example.

Windows Phone 7 is primarily a successor of the Windows Mobile Platform (the last one in the line being Windows Mobile 6.5). It's a major revamp in Microsoft's Mobile strategy in terms of its application development model.

Windows Phone 7 no longer supports unmanaged code (Win32 and C++ are gone). It is purely built on Managed Code covering Silverlight, XNA and the .NET Framework (Programming Languages include C# and VB.Net). Here's a nice article on Windows Phone 7 application development model - http://blogs.msdn.com/b/abhinaba/archive/2010/03/13/windows-phone-7-series-programming-model.aspx.

To get started with Windows Phone 7 development, download the Windows Phone Developer Tools RTW (Release To Web). It includes the following tools -

  1. Visual Studio 2010 Express for Windows Phone
  2. Windows Phone Emulator Resources
  3. Silverlight 4 Tools for Visual Studio
  4. XNA Game Studio 4.0
  5. Microsoft Expression Blend for Windows Phone

If you are already using a higher version of Visual Studio or Expression Studio, this toolkit will install extensions to the existing IDEs. The release is also available as an .iso image. Though the Windows Phone 7 toolkit comes equipped with Expression Blend, for most of the basic applications the design mode of Visual Studio should be sufficient.

To get some hands-on experience with Windows Phone 7, let's build a simple application which accepts a username and greets the user on the next page. The best part of Windows Phone 7 development is that it follows the same pattern as other Visual Studio project types we are familiar with, like Windows Forms applications, WPF applications and ASP.NET applications: the UI can be created easily using the designer, and the business logic lives in an event-driven code-behind file (.cs or .vb, depending on the programming language).

As already mentioned, Windows Phone 7 is built on Silverlight, and thereby every screen has a corresponding XAML (eXtensible Application Markup Language) page, each having its own code-behind file. Movement between these XAML pages is made possible through the NavigationService instance available on every page.

There are three important methods provided by NavigationService - Navigate, GoForward and GoBack. Navigate takes a Uri instance specifying the location of the XAML page to load.

NavigationService.Navigate(new Uri("/NamePage.xaml", UriKind.Relative));

GoBack and GoForward load the previous and next pages respectively.

NavigationService.GoBack();
NavigationService.GoForward();

Parameters can be passed across XAML pages by appending them to the URI and retrieving them through the QueryString property of the page's NavigationContext. This aspect of Windows Phone 7 surprised me; since mobile applications feel much closer to desktop applications, passing parameters through URLs the way web applications do seemed a bit off-track.

NavigationService.Navigate(new Uri("/NamePage.xaml?name=" + name, UriKind.Relative));
string name = NavigationContext.QueryString["name"];

Download the example code here.

Sunday, January 30, 2011

Introducing ADO.NET Entity Framework

This is the third and final part of my ORM series in which I am going to introduce the ADO.NET Entity Framework, an in-built Object Relational Mapping model of the .NET Framework.

Similar to the previous post, this one also covers the same four principles -

  • Configuring a .NET project for the ADO.NET Entity Framework
  • Inserting data from Objects directly
  • Retrieving data using Object Lists and LINQ
  • Changing the Database Management System

Though the screenshots and the example code emphasize C#, the principles are the same for all the .NET languages.

Requirements to run the example in this article - Visual C# Express, SQL Server Express and MySQL. Download Visual C# Express here, SQL Server Express here and MySQL here.

Configuring a .NET project for the ADO.NET Entity Framework

To configure a .NET project to work with the ADO.NET Entity Framework, an Entity Data Model is added to the project. The Entity Data Model is primarily a schematic representation of the database tables, stored as an XML file. Each of these tables is converted into a class, and foreign key relationships between these tables are maintained as Lists inside the objects.

Visual Studio provides a wizard to create Entity Data Models. Right-click on the project and select New Item from the Add menu, then choose the ADO.NET Entity Data Model template from the chooser. There are two approaches available to build the Entity Data Model - the Database First approach and the Model First approach.

In the Database First approach, the Entity Data Model is generated from existing table structures. The wizard allows the developer to set up a connection by providing the database server name and the database name, and to choose the tables, views and stored procedures that are to be a part of the Entity Data Model.

In the Model First Approach, the Entity Data Model is created from scratch and the database tables are generated based on this model.

Once the wizard completes, a designer opens up which shows a schematic representation of the generated .edmx file. This file contains the table mappings as XML, along with a code-behind file with a .Designer.cs extension for the auto-generated classes corresponding to the tables. However, a drawback of the Entity Data Model is that it combines all the classes into a single file, which makes manual maintenance a little difficult.

I used the Database First approach to create the sample application to insert and retrieve data. You can get the MySQL and SQL Server scripts along with the Visual Studio solution here.

Inserting data from Objects directly

The Entity Model generates a class which inherits from ObjectContext. This class acts as a data manager to connect to the Database Management System to retrieve, insert, update and delete records.

ADO.NET Entity Framework maintains the records of a table as a List of objects. To insert new records into the table, create new objects, add them to the appropriate lists and then save the changes using an ObjectContext instance. The following piece of code shows how objects can be persisted -

using (Context context = new Context())
{
    // Create the entity ("object" is a reserved word in C#, so use a different name)
    context.Objects.Add(entity);
    context.SaveChanges();
}

ADO.NET Entity Framework does a wonderful job of storing objects with foreign key dependencies. These dependencies are maintained as lists of child objects, and while storing the records, the appropriate identity keys are inserted into the child tables. However, this feature is limited to a few DBMSs like SQL Server.

Retrieving data using LINQ and Object Lists

The ADO.NET Entity Framework retrieves data from the back-end tables in the form of object lists. So accessing records is as simple as iterating through these lists.

using (Context context = new Context())
{
    foreach (var entity in context.Objects)
    {
        // use the entity appropriately
    }
}

Since data retrieval is in the form of lists, developers can piggy-back on another .NET Framework feature - LINQ (Language Integrated Query). LINQ makes conditional querying of data a lot easier.

using (Context context = new Context())
{
    var entities = from entity in context.Objects
                   where entity.SomeProperty == someValue   // replace with a real condition
                   select entity;
    // use the entities
}

Changing the Database Management System

Before changing the DBMS of the Entity Model, it is important to understand how the ADO.NET Entity Framework stores the connection strings and the mappings between classes and the back-end tables. The connection string is stored in the App.config file of the project, and the table mappings are stored as XML in the .edmx file, as mentioned earlier.

Unfortunately, the ADO.NET Entity Framework varies its implementation with the DBMS. Because of this, modifying the XML manually isn't easy. For example, the ADO.NET Entity Framework does not support foreign key constraints in the form of lists for DBMSs like MySQL.

The sample application contains another Entity Model which connects to a MySQL server containing similar tables. To use MySQL with the ADO.NET Entity Framework, a connector is needed. The MySQL Connector/NET is available here.

Wednesday, January 05, 2011

Introducing Hibernate In Java Using NetBeans

In one of my recent posts, I introduced the theoretical topic of Object-Relational Mapping (ORM) - http://gautam-m.blogspot.com/2010/11/object-relation-mapping.html. In this post I am going to take a step forward and introduce Hibernate - an open source Java persistence framework from JBoss.

This post covers four basic principles of Hibernate -

  • Configuring a Java project for Hibernate
  • Inserting data using Object Persistence
  • Retrieving data using Hibernate Query Language (HQL)
  • Changing the database configuration to connect to another DBMS

Though the post and screenshots emphasize NetBeans, the concepts are the same for all IDEs. Hibernate configuration files can certainly be written without an IDE, but make sure all the required class libraries are properly referenced.

Requirements to run the example in the article - NetBeans, MySQL, JavaDB, Java and Hibernate. Java can be downloaded here; installing the All NetBeans package will cover JavaDB and Hibernate; and MySQL can be downloaded here.

Configuring a Java project for Hibernate

The crux of Hibernate is the creation and usage of configuration files. There are three types of configuration files which are to be setup for Hibernate -

  1. The .cfg.xml file - this is the main configuration file which contains information about the database, like the database URL, the driver, the username and password, etc. Hibernate can optimize its behavior depending on the DBMS being used; to facilitate this, a property called Dialect is specified. However, this is an optional property, as Hibernate can deduce it from the JDBC metadata returned by the driver
  2. The .reveng.xml file - this file holds the data corresponding to the schemas and tables being utilized by Hibernate in the application
  3. The .hbm.xml files - these map POJOs (Plain Old Java Objects) to the table schemas of the database

Typically, one .cfg.xml and one .reveng.xml exist per project, and one .hbm.xml exists for each table (mapped to a class). The .hbm.xml file maps the object properties to the table columns. It is possible to add new properties to the class which have no effect on the back-end tables.
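As an illustration, a minimal .hbm.xml for a hypothetical Student class (all class, table and column names here are assumptions for the sketch, not taken from the sample project) might look like -

```xml
<hibernate-mapping>
  <class name="Student" table="STUDENT_TBL">
    <!-- Primary key mapped to an identifier property -->
    <id name="studentId" column="STUDENT_ID">
      <generator class="native"/>
    </id>
    <!-- Plain properties mapped to columns -->
    <property name="name" column="NAME"/>
    <property name="age" column="AGE"/>
  </class>
</hibernate-mapping>
```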

To create these files in a NetBeans project, select New File and select the following File Types from the Hibernate category -

  1. Hibernate Configuration Wizard
  2. Hibernate Reverse Engineering Wizard
  3. Hibernate Mapping Files and POJOs from Database

Follow the wizards to complete the configuration setup. I used MySQL and JavaDB as my DBMS to create a sample application to insert and retrieve data. You can get the MySQL and JavaDB scripts along with the NetBeans project here.

Inserting data using Object Persistence

Inserting data is a cakewalk in Hibernate. All there is to do is create the object and store its data in the database tables using the save method of a Session object (obtained from a SessionFactory). The following piece of code persists the object data -

SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory();
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

// Create the object
session.save(object);

tx.commit();
session.close();

Retrieving data using Hibernate Query Language (HQL)

Retrieving data is done through a query language designed for Hibernate, called the Hibernate Query Language. HQL is an object-oriented query language and is very similar to the traditional SQL we use. The beauty of HQL is that the result of the query is returned as a list of objects rather than as a ResultSet, and these objects can be used directly in the code without any overhead. HQL is a very wide topic, so I am going to skip the details here, but there are several tutorials available for HQL on the Internet. The createQuery method of the above Session object is used along with the list method of the Query object to get the objects -

Query query = session.createQuery(queryString);
for (Object object : query.list()) {
    // cast and use the object appropriately
}
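For instance, assuming a hypothetical Student class mapped to a table (the entity and property names are just illustrative), the queryString above could be plain HQL like -

```
from Student s where s.age > 18 order by s.name
```

Note that the query names the mapped class and its properties, not the underlying table and columns - that translation is exactly what Hibernate does for us.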

Changing the database configuration to connect to another DBMS

The best feature of Hibernate, in my opinion, is its ability to change the DBMS without any change to the application code. To change the DBMS, open the Hibernate configuration file (typically hibernate.cfg.xml) and change the dialect, driver class, connection URL, username and password to the values corresponding to the new DBMS.

These changes can be done either through the design view or directly on the XML.

Wednesday, November 17, 2010

Capital IQ - Moving To McGraw Hill Financials

In a move to foster innovation and drive growth, The McGraw Hill Companies (NYSE: MHP) announced several Organizational and Management changes a few days back (November 15th 2010).

Though there are a few management changes, I will mostly be focusing on the organizational changes here. From a bird's-eye view, the current financial services of McGraw-Hill will be realigned into two segments - Standard & Poor's and McGraw-Hill Financials.

Currently, McGraw-Hill Companies is divided into three segments -

  1. Financial Services which includes Standard & Poor's Credit Market Services and S&P Investment Services (which includes Capital IQ)
  2. McGraw-Hill Education
  3. Information & Media

Beginning January 1st, 2011, McGraw-Hill Companies' reporting segments will be -

  1. Standard & Poor's - the leading credit rating company
  2. McGraw-Hill Financials - combining Capital IQ (including ClariFI, Compustat, etc.), S&P Indices, Valuation & Risk Strategies and Equity Research Services
  3. McGraw-Hill Education - the world's premier education services company
  4. McGraw-Hill Information & Media - a global business information company

The words of Harold McGraw Hill III, President and CEO of the McGraw Hill Companies on the strategic decision -

"This change will enhance our ability to deliver a broad and deep suite of products for investors across asset classes around the world, positioning us to capitalize on the growth trends we see in the global financial markets."

Check out the official announcement here. Coming to the impact of this decision on the Capital IQ brand - I guess our tagline "A Standard & Poor's Business" might change accordingly. However, there hasn't been any official confirmation on this; I will post an update if there is anything.

Monday, November 15, 2010

RockMelt - Will Melt Your Heart

In my previous post I said I would be writing about Hibernate in this article; however, I made an exception to write about a new browser I just started using - RockMelt. RockMelt is a social media web browser developed by Tim Howes and Eric Vishria and backed by Netscape founder Marc Andreessen.

It wouldn't be an overstatement to say that it will melt your heart, especially if you're into social networking and, in particular, a regular Facebook user like me. Here's what RockMelt looks like -

By this time, I guess most of you must have realized that it looks pretty much like Google Chrome. In fact it is more-or-less Chrome itself. Developers who have been interested in Google Chrome would have heard about Google's Chromium project - the open source project from which Chrome was born. Chromium was the parent of browsers like Nickel and now RockMelt.

However RockMelt was built to target a specific crowd - the Social Networking generation, especially users of Facebook and Twitter. Let me highlight a few features of RockMelt which I found interesting -

Friends Strip

RockMelt integrates Facebook chat directly into the browser as a strip running along the left side. The chat UI is pretty impressive; in fact, it is as good as Google Talk. The best part is that each chat window runs as a desktop app independently of the browser, allowing users to use chat windows individually, like Google Talk.

Status Updates

Updating your status in Facebook is just a click away from any site. With RockMelt, you need not actually go to Facebook to change your status or share a link. The browser itself allows you to do it.

RSS Feeds

The thing I liked best about RockMelt was its integration of RSS/Atom feeds into the browser itself. Though many browsers manage feeds, nothing beats this. The integration of the feeds as a strip on the right is really awesome. By default, Twitter and Facebook feeds are built into the browser. Feeds from other sites like Gmail and blogs can be added easily.

For people who use browsers only for surfing the web and reading articles, these changes won't mean much, and the regular Chrome browser should be enough. But people who are into networking, blogging, etc. will find it exciting and might enjoy it.

RockMelt was released a few days back and is presently "By Invitation" only; if you are interested, you should register here for early access - http://www.rockmelt.com/. Luckily, I got an invitation from one of my friends :). So if you want to try it, either register at the mentioned site or catch hold of a friend who already has an invitation :D.

Know more about RockMelt straight from the horse's mouth -

I will be back with Hibernate and the ADO.NET Entity Framework in my next two posts. Until then, Happy Facebooking and Happy Twittering :)

Friday, November 12, 2010

Object-Relational Mapping

Over the years we have seen a paradigm shift in Programming Languages from the traditional Procedural programming approach to an Object-Oriented approach. However Databases have changed very little in terms of their fundamental principle - Set Theory. Databases are and have been Relational almost from their advent. Most of the Database Management Systems which we use like MySQL, Microsoft SQL Server and Oracle are Relational to a very large extent. Though there have been approaches like Object-Relational DBMS and Hierarchical DBMS, they have rarely been adopted in production environments.

This article focuses on creating a relationship between Object-Oriented Programming Languages and Relational Databases through a concept called Object-Relational Mapping. This article will be followed by two related articles - Using Hibernate in Java with NetBeans and Using ADO.NET Entity Framework in .NET (C#) with Visual Studio.

Before jumping into ORM, here are two common concepts which most of you must be familiar with -

Class - A class is a construct that is used as a blueprint (or template) to create objects of that class. A class defines the properties that each and every object possesses.

Table Schema - A table schema primarily defines the fields and relationships of a table. A table contains records which have the same structure.

Observing closely, we can notice that these two definitions are pretty similar. Both of them talk about a template which multiple instances follow (objects in OO and records in DBMS). Both of them talk about fields (properties) these instances possess.

This similarity is the basis of ORM. Table Schemas of Relational Databases correspond to Classes in an Object Oriented Programming Language and Records of these tables correspond to instances of the Classes (Objects).

Consider an example of a Student table in a Database created with the following schema -

CREATE TABLE Student_tbl
(
    StudentId INT PRIMARY KEY,
    Name VARCHAR(MAX),
    Age INT
)

This table can be translated into a Class with the following structure -

class Student
{
    Int32 StudentId;
    String Name;
    Int32 Age;
}

Each record of the Student_tbl will be an object of the Student class.
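To make the correspondence concrete, here is a small hand-rolled sketch (plain Java, no ORM package) of the row-to-object mapping that ORM tools automate. The mapRow helper and the Map-based row are illustrative assumptions, not part of any real framework -

```java
import java.util.Map;

class Student {
    int studentId;
    String name;
    int age;
}

public class ManualOrm {
    // What an ORM generates for us: turn one table row (column name -> value)
    // into an instance of the corresponding class
    static Student mapRow(Map<String, Object> row) {
        Student s = new Student();
        s.studentId = (Integer) row.get("StudentId");
        s.name = (String) row.get("Name");
        s.age = (Integer) row.get("Age");
        return s;
    }

    public static void main(String[] args) {
        // One record of Student_tbl becomes one Student object
        Student s = mapRow(Map.of("StudentId", 1, "Name", "Alice", "Age", 20));
        System.out.println(s.name); // Alice
    }
}
```

An ORM package generates the class, the mapping and this plumbing for every table, which is where the code savings come from.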

There are several free and commercial packages for Object Oriented languages that perform Object Relational Mapping. Most of these packages incorporate advanced features like -

  • Automating the class generation process from the Database Schemas
  • Maintaining foreign key dependencies using Lists
  • Generating identity keys while inserting records and using these keys in subsequent insertions as foreign keys if required
  • Creating methods for retrieving data and saving data directly as objects

The major advantages of ORM lie in -

  • Minimal database dependency - most ORM packages use a concept called 'Dialect' to identify the DBMS the application is connecting to, so changing the dialect when the DBMS changes is sufficient for the application to run; no application code has to be changed
  • ORM reduces the amount of code that needs to be written by a developer

However, it is often argued that ORM packages don't perform efficiently during bulk deletions and with joins. So it is generally recommended to check whether there is a hit in the efficiency of the application when ORM tools are introduced, especially when complex operations are involved.

Though ORM is a simple concept, it is rapidly overshadowing the traditional database connectivity models in object-oriented programming languages like Java and C#. In my next post, I will introduce Hibernate - an ORM package for Java - and in the subsequent post, the ADO.NET Entity Framework - an ORM framework for .NET.

Sunday, September 26, 2010

Standard Compression Scheme For Unicode (SCSU) - Java & .NET Implementations

In the previous article, I introduced the compression techniques available in SQL Server and highlighted the Unicode Data Compression feature of SQL Server 2008 R2. This post will cover the algorithm used by SQL Server for Unicode compression.

The Java and .NET (C#) implementations of the algorithm have been attached to the post. They have been built as Class Libraries to support reusability.

Standard Compression Scheme for Unicode

As evident from the post title, the algorithm used in Unicode Data Compression is the Standard Compression Scheme for Unicode (SCSU). SCSU is a technical standard for reducing the number of bytes needed to represent Unicode text. It encodes a sequence of Unicode characters as a compressed stream of bytes. It is independent of the character encoding scheme of Unicode and can be used for UTF-8, UTF-16 and UTF-32.

I am not going to explain the entire SCSU algorithm here, but you can get its specification from the Unicode Consortium. The SCSU algorithm processes input text in terms of Unicode code points. A very important aspect of SCSU is that if two compressed streams consist of the same sequence of bytes, they represent the same sequence of characters. However, the reverse isn't true; there are multiple ways of compressing a given character sequence.
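A quick way to see the headroom SCSU exploits (this sketch just counts bytes; it is not the SCSU algorithm itself): UTF-16 spends two bytes per character, and for text that stays within one small alphabet, one of those bytes carries no information -

```java
import java.nio.charset.StandardCharsets;

public class Utf16Overhead {
    public static void main(String[] args) {
        String text = "Hello";
        byte[] utf16 = text.getBytes(StandardCharsets.UTF_16LE);
        // 5 characters take 10 bytes in UTF-16; for ASCII the high byte is always 0x00
        System.out.println(utf16.length);
        // In SCSU's default single-byte window, each of these characters
        // would be encoded in 1 byte, i.e. 5 bytes in total
    }
}
```

SCSU's dynamic windows extend the same trick to other small alphabets like Cyrillic or Devanagari, which is why it helps most when the text sticks to one script.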

Applications/Organizations using SCSU -

  • Symbian OS uses SCSU to serialize strings
  • The first draft of SCSU was released by Reuters, a news service and former financial market data provider. Reuters is believed to use SCSU internally
  • As already mentioned, SQL Server 2008 R2 uses SCSU to compress Unicode text

To be honest, SCSU has not been a major success. Very few applications need to compress Unicode data using a special compression scheme. In certain situations, especially when the text contains characters from multiple character sets, the compressed text can end up being larger than the uncompressed one.

SQL Server and SCSU

SQL Server stores data in the compressed format only if it occupies less space than the original data. Moreover, there must be at least three consecutive characters from the same code page for the algorithm to be triggered.

A major issue with the implementation is determining whether the stored text is in compressed or uncompressed format. To resolve this issue, SQL Server makes sure that the compressed data contains an odd number of bytes and adds special case characters whenever required.

The SQL Server implementation details are from a blog post by Peter Scharlock, a SQL Server Senior Program Manager.

SCSU Implementation

Though the specification of SCSU is pretty comprehensive and a sample Java implementation is available from the Unicode Consortium, that implementation isn't reusable as it is built as a console app.

Using the specification and the sample Java implementation, I built a similar implementation as a Class Library to encourage reuse. The implementation is available in two languages - Java and C#.

The Java implementation is made available as a NetBeans project and the .NET implementation is made available as a Visual Studio solution. The implementations come along with a sample Front End which uses the corresponding Class Library.

If you are planning to modify the source code of the implementations, please keep these points in mind -

  • The .NET implementation differs from the Java implementation in one basic fact: byte in Java is signed while byte in .NET is unsigned. sbyte is available in .NET, but using byte is more comfortable
  • Both implementations have been tested with the UTF-16 little-endian encoding scheme. Since the default byte order of a Unicode character in Java is big-endian, a few tweaks have been implemented

Example

To check the integrity of the SCSU implementations, a sample text file containing text in German, Russian and Japanese (the same text available at the SCSU specification site) was taken, and the compressed output was verified against the expected result. The size of the file was compressed from 274 bytes to about 199 bytes.

The source code of the Java and .NET implementations can be found here. If you find any bugs, please let me know and I will make the necessary modifications.

Friday, September 17, 2010

Unicode Data Compression In SQL Server 2008 R2

SQL Server 2008 R2 was released a few months back, and one of the features I found interesting was its ability to compress Unicode data. In this post, I will introduce the various compression options available in SQL Server, and towards the end I will present a sample analysis used to estimate the efficiency of Unicode data compression and the compression-ratio improvements of SQL Server 2008 R2 over SQL Server 2008.

In my next post, I will cover the actual algorithm used by SQL Server to achieve this compression and will provide Java and .NET implementations of the algorithm.

Compression Techniques in SQL Server

In computer science, compression is the process of encoding information in fewer bits than an un-encoded representation would use. The compression techniques available in SQL Server can be broadly categorized into two types depending on the way they are architected - Data Compression and Backup Compression.

Data compression occurs at runtime, the data is stored in a compressed form to reduce the disk-space occupied by a database. On the other hand, backup compression occurs only at the time of a backup and uses a proprietary compression technique. Backup compression can be used on a database that has already undergone data compression, but the savings might not be significant.

Data compression is again of two types - row-level data compression and page-level data compression. Row-level compression primarily turns fixed-length data types into variable-length data types, thereby saving space. It also ignores zero and null values, saving additional space. Because of this, more rows can be accommodated in a single data page. Page-level compression first performs row-level compression and then adds two additional compression features - prefix and dictionary compression. As evident, page-level compression offers better space savings than row-level compression.

Though compression can provide significant space saving, it can also cause severe performance issues if misused. For further reading on compression, refer "An Introduction to Data Compression in SQL Server 2008".

As the name suggests, Unicode Data Compression comes under Data Compression, and to be more specific it's a part of Row-level Compression.
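For reference, enabling row compression on an existing table is a single rebuild; this is a sketch with a placeholder table name (dbo.SampleTable is an assumption, not from the sample scripts) -

```sql
-- Estimate the savings before committing to a rebuild
EXEC sp_estimate_data_compression_savings 'dbo', 'SampleTable', NULL, NULL, 'ROW';

-- Rebuild the table with row compression
-- (which includes Unicode compression in SQL Server 2008 R2)
ALTER TABLE dbo.SampleTable REBUILD WITH (DATA_COMPRESSION = ROW);
```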

Sample Analysis

Microsoft had promising stats on Unicode data compression in SQL Server 2008 R2, going up to 50% space savings on a few character sets like Hindi, German, etc. So I decided to give it a try myself.

Being from India, I decided to test the compression ratios for Hindi text. I created a randomizer in C# (.NET) to generate random text from a few Hindi phrases obtained from Linguanaut. The program generates 1.5 million random Hindi strings and writes them into a temporary file which is Bulk Inserted into a table.

To check the improvement of SQL Server 2008 R2 over SQL Server 2008 in terms of Data Compression, two separate instances of SQL Server were established on the same system configuration (Intel Core 2 Quad and 4 GB RAM). Both the instances had the same schemas for the databases and the tables. The Randomizer and the Schemas + Bulk Insert scripts are attached below.

A major drawback of Unicode data compression in SQL Server 2008 R2 is that it cannot be applied to columns of the data types NTEXT and NVARCHAR(MAX); to highlight this, we used two different tables, one using NTEXT and another using NVARCHAR(250).

Here is a quick reference table of the compression-ratios obtained in the analysis -

                  SQL Server 2008     SQL Server 2008 R2
  NTEXT               98.24%              98.25%
  NVARCHAR(250)       95.68%              57.78%

From the above table, we can observe that the compression ratio for Unicode data in SQL Server 2008 R2 is around 57% (nearly the space saving mentioned by Microsoft). However, in all the other cases, we can observe that the savings are almost negligible. For space savings of other character sets, refer to "Unicode Compression (MSDN)".

Get the Visual Studio Solution of the Randomizer and the Database Scripts here.

SQL Server 2008 R2 Screen Shots

SQL Server 2008 Screen Shots

Sunday, August 22, 2010

Creating A Bootable Windows 7 USB Flash Drive

In this post, I am going to explain how to create a bootable USB drive with the Windows 7 installer. It's a pretty simple and straightforward process. There are several advantages to doing this -

  • Installing Windows 7 on computers without an Optical CD/DVD Drive. Many of the new laptops/desktops come without the CD/DVD drive. Using a bootable USB is very helpful here
  • Installing from the USB drive is typically faster than installing from a DVD drive
  • OS disc images (.iso, .nrg) need not be burned onto a CD/DVD to use them. You can make a bootable USB drive from the image and install the OS from the USB

There are two phases involved in the process - "Formatting the USB drive" and "Copying the installation files and making it bootable".

Formatting the USB drive

I am using the DiskPart command-line utility in the following steps to format the USB drive. I guess even the right-click Format option can be used, but I haven't tried it. If you run into any trouble using Format from the right-click menu, put in a comment and I will look into it.

  1. Open the command prompt as an Administrator
  2. Start the DiskPart utility by typing in "diskpart", then list the drives connected to the system using the "list disk" command. Identify the disk corresponding to the USB drive
  3. Select the USB drive using the "select disk ###" command
  4. Clean it using the "clean" command
  5. Create a primary partition on the USB using the "create partition primary" command. This is where we will be copying the installation files
  6. Select the partition using the "select partition 1" command
  7. Make it active by typing "active"
  8. Format the drive and create an NTFS filesystem on it using the "format fs=ntfs" command
  9. Assign a drive letter to the USB disk using the "assign" command
  10. Exit the DiskPart utility using the "exit" command

The USB is now formatted and is ready for the transfer.
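For reference, the DiskPart steps above can be collected into a script file and replayed in one go with "diskpart /s". A minimal sketch, assuming the USB drive showed up as disk 1 in the "list disk" output (double-check this first - selecting the wrong disk will wipe it):

```
rem usb-format.txt - run with: diskpart /s usb-format.txt
rem Assumes the USB drive is disk 1; verify with "list disk" first!
select disk 1
clean
create partition primary
select partition 1
active
format fs=ntfs
assign
exit
```

The plain NTFS format can take a while on large drives; DiskPart also accepts "format fs=ntfs quick" if you prefer a quick format.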

Copying the installation files and making it bootable

The following steps are the crux of the process.

  1. Insert the Windows 7 DVD
  2. Navigate to the boot directory of the DVD
  3. There is a utility called "Bootsect" which comes bundled in the boot directory of the Windows 7 DVD. This does all the work for us
  4. Run it from the command line, specifying the drive letter of the USB drive: bootsect.exe /NT60 X: (replace X: with the letter assigned to your USB drive earlier)
  5. The utility writes the appropriate boot code to the USB drive
  6. Copy the contents of the DVD onto the USB drive

That's it. Reboot the system, make sure that USB boot is given the highest priority in the BIOS boot order, and the installation starts.
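Put together, this second phase boils down to two commands in an elevated command prompt. A sketch, assuming the DVD is mounted as D: and the USB drive was assigned X: (adjust both letters to match your system):

```
rem Write the Windows 7 boot code to the USB drive
D:\boot\bootsect.exe /NT60 X:

rem Copy the entire DVD, including empty folders and hidden/system files
xcopy D:\*.* X:\ /e /f /h
```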

Saturday, August 07, 2010

Community TechDays August 2010

I spent my entire day today at Microsoft IDC, Hyderabad where I had a wonderful experience at Community TechDays organized by the Microsoft User Group, Hyderabad (MUGH). For people wondering what Community TechDays is - it is a multi-city event conducted quarterly by the technology community for Developers, Infrastructure Professionals, Architects, Project Managers and Students. This was the fourth Community TechDays series in India but the first one I attended.

The event started at around 9:30 and there were two parallel tracks - one for the Developers and another for the IT Professionals. Having registered for both, I moved between the tracks and attended the sessions I found interesting - 5 in all. I will give a brief description of each and will try to get the technical details on my Tech Blog sometime soon. But no promises on that :)

The first session was on the data compression options in SQL Server 2008 by Arun Shankar, which was pretty good. It covered how the database can be compressed during backup, and various data compression techniques like row-level compression and page-level compression to reduce the size taken up by the database. Following that was a session on IIS End to End by Muqeet Khan. To be honest, I didn't have much experience with IIS but had a reasonable exposure to web servers as a whole. So I learnt a lot through the session and loved the concept of running languages like PHP and Java on IIS 7.0.

The next session I attended was on Cloud Services with the Windows Azure Platform by Arun Ganesh. This was one session where the content was very good but time was a major constraint. He started with an introduction to Cloud Computing and a live demo on developing and deploying a sample application on Windows Azure. He also covered Azure Storage - Tables, Blobs and Queues - with a live example. Though he introduced SQL Azure and Azure Services like Service Bus and Access Control, he couldn't cover a live example there. As I had prior experience in developing and deploying an Azure app, I was really looking forward to the demo on Service Bus and Access Control, so I was a bit disappointed at the end.

Post lunch, there was a session on WCF in .NET 4.0 and Windows Server AppFabric by Phani Kumar. This was a bouncer for me. With negligible hands-on experience with WCF, the session was way above my head. Though the speaker was explaining it well, my friends and I had a tough time relating it to what we had done. The last session was more like an open discussion, again with Muqeet Khan. It started with batch processing admin tasks in PowerShell and soon moved on to how Windows works. It was more like a refresher of my OS class at college, but more entertaining :D

The day ended with a curtain raiser of Xbox Project Natal (more popularly known as Kinect for Xbox 360) at around 4:30. I can't wait to get my hands on it when it comes to India in December this year.

Oh, the best part of the day - I guess we were among the youngest developers who came for the event today. It was actually interesting sitting in a crowd of people with around 2-3 years of work experience when ours was around 2-3 months ;)