Thursday, June 28, 2007

What is deadlock?

What is deadlock, and what mechanisms prevent it?

A deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does. It is often likened to the "chicken or the egg" paradox.

In computing, deadlock refers to a specific condition in which two or more processes are each waiting for another to release a resource, or in which more than two processes are waiting for resources in a circular chain. Deadlock is a common problem in multiprocessing, where many processes share a specific type of mutually exclusive resource known as a software lock (or soft lock). Computers intended for the time-sharing and/or real-time markets are often equipped with a hardware lock (or hard lock) that guarantees exclusive access to processes, forcing serialization. Deadlocks are particularly troubling because there is no general solution for avoiding (soft) deadlocks.

This situation may be likened to two people drawing diagrams with only one pencil and one ruler between them. If one person takes the pencil and the other takes the ruler, a deadlock occurs when the person with the pencil needs the ruler and the person with the ruler needs the pencil, and neither will give up what they hold. Neither request can be satisfied, so a deadlock occurs.

The telecommunications description of deadlock is a little stronger: deadlock occurs when none of the processes meets the conditions to move to another state (as described in each process's finite state machine) and all the communication channels are empty. The second condition is often omitted on other systems, but it is important in the telecommunications context.

Necessary conditions

There are four necessary conditions for a deadlock to occur, known as the Coffman conditions from their first description in a 1971 article by E. G. Coffman.

  1. Mutual exclusion condition: a resource is either assigned to one process or it is available
  2. Hold and wait condition: processes already holding resources may request new resources
  3. No preemption condition: only a process holding a resource may release it
  4. Circular wait condition: two or more processes form a circular chain where each process waits for a resource that the next process in the chain holds

Deadlock occurs only in systems where all four conditions hold simultaneously.

Circular wait prevention

Circular wait prevention consists of allowing processes to wait for resources while ensuring that the waiting cannot be circular. One approach is to assign a precedence to each resource and force processes to request resources in order of increasing precedence. That is, if a process holds some resources and the highest precedence among them is m, then this process cannot request any resource with precedence smaller than m. This forces resource allocation to follow a fixed, non-circular ordering, so circular wait cannot occur. Another approach is to allow each process to hold only one resource at a time: if a process requests another resource, it must first free the one it currently holds (which also eliminates the hold-and-wait condition).
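The precedence approach can be sketched in a few lines of C++ (a minimal illustration; the mutex names and the update_both function are invented for the example):

```cpp
#include <mutex>
#include <thread>

// Two shared resources with an agreed precedence: resA (1) before resB (2).
std::mutex resA;        // precedence 1
std::mutex resB;        // precedence 2
int shared_value = 0;   // data guarded by both locks, for illustration

// Every code path acquires the locks in increasing precedence order,
// so no circular chain of waits can ever form.
void update_both()
{
    std::lock_guard<std::mutex> a(resA); // lower precedence first
    std::lock_guard<std::mutex> b(resB); // then the higher one
    ++shared_value;
}
```

Because every path takes resA before resB, two threads can contend for the locks but can never each hold the one the other needs.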


An example of a deadlock which may occur in database products is the following. Client applications using the database may require exclusive access to a table, and in order to gain exclusive access they ask for a lock. If one client application holds a lock on a table and attempts to obtain the lock on a second table that is already held by a second client application, this may lead to deadlock if the second application then attempts to obtain the lock that is held by the first application. (But this particular type of deadlock is easily prevented, e.g., by using an all-or-none resource allocation algorithm.)

Another example might be a text formatting program that accepts text sent to it to be processed and then returns the results, but does so only after receiving "enough" text to work on (e.g., 1 KB). A text editor program is written that sends the formatter some text and then waits for the results. In this case a deadlock may occur on the last block of text: since the formatter may not have sufficient text for processing, it will suspend itself while waiting for additional text, which will never arrive since the text editor has already sent it all of the text it has. Meanwhile, the text editor is itself suspended waiting for the last output from the formatter. This type of deadlock is sometimes referred to as a deadly embrace (properly used only when exactly two applications are involved) or starvation. However, this situation, too, is easily prevented by having the text editor send a forcing message (e.g., EOF) with its last (partial) block of text, which forces the formatter to return the last (partial) block after formatting rather than wait for additional text.

Nevertheless, since there is no general solution for deadlock prevention, each type of deadlock must be anticipated and specially prevented. General algorithms can, however, be implemented within the operating system so that if one or more applications becomes blocked, it is usually terminated after a time (and, in the meantime, is allowed no other resources and may be forced to surrender those it already holds, rolling them back to their state before the application obtained them).


Deadlock can be avoided if certain information about processes is available in advance of resource allocation. For every resource request, the system checks whether granting the request would put the system into an unsafe state, meaning a state that could result in deadlock. The system then grants only requests that lead to safe states. In order for the system to be able to figure out whether the next state will be safe or unsafe, it must know in advance, at any time, the number and type of all resources in existence, available, and requested. One well-known algorithm used for deadlock avoidance is the Banker's algorithm, which requires resource usage limits to be known in advance. However, for many systems it is impossible to know in advance what every process will request. This means that deadlock avoidance is often impossible.
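The safety check at the heart of the Banker's algorithm can be sketched as follows (a minimal C++ illustration; the function name and matrix layout are assumptions for the example, not a production implementation):

```cpp
#include <vector>

// alloc[i][j]  : units of resource j currently held by process i
// maxdem[i][j] : maximum units of resource j that process i may ever request
// avail[j]     : units of resource j currently free
// Returns true if some ordering lets every process finish (a "safe" state).
bool is_safe(const std::vector<std::vector<int>>& alloc,
             const std::vector<std::vector<int>>& maxdem,
             std::vector<int> avail)
{
    const std::size_t n = alloc.size(), m = avail.size();
    std::vector<bool> finished(n, false);
    for (std::size_t done = 0; done < n; ) {
        bool progressed = false;
        for (std::size_t i = 0; i < n; ++i) {
            if (finished[i]) continue;
            bool can_finish = true;
            for (std::size_t j = 0; j < m; ++j)
                if (maxdem[i][j] - alloc[i][j] > avail[j]) { can_finish = false; break; }
            if (can_finish) {
                // Process i can run to completion, then frees everything it held.
                for (std::size_t j = 0; j < m; ++j) avail[j] += alloc[i][j];
                finished[i] = true; ++done; progressed = true;
            }
        }
        if (!progressed) return false; // no process can finish: unsafe
    }
    return true;
}
```

A request is granted only if the state that would result still passes this check.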

Two other algorithms are Wait/Die and Wound/Wait, each of which uses a symmetry-breaking technique. In both algorithms there is an older process (O) and a younger process (Y). Process age can be determined by a timestamp assigned at process creation time: smaller timestamps belong to older processes, larger timestamps to younger ones.



The two algorithms resolve the same two situations differently:

  • O is waiting for a resource held by Y: under Wait/Die, O waits; under Wound/Wait, Y dies.
  • Y is waiting for a resource held by O: under Wait/Die, Y dies; under Wound/Wait, Y waits.
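The two rules reduce to simple timestamp comparisons, sketched here in C++ (the names are invented for the example; a smaller timestamp means an older process, as above):

```cpp
// Wait/Die: an older requester may wait; a younger requester is killed.
enum class Action { Wait, Die };

Action wait_die(int requester_ts, int holder_ts)
{
    return requester_ts < holder_ts ? Action::Wait : Action::Die;
}

// Wound/Wait: an older requester "wounds" (kills) the younger holder;
// a younger requester simply waits.
enum class Outcome { RequesterWaits, HolderDies };

Outcome wound_wait(int requester_ts, int holder_ts)
{
    return requester_ts < holder_ts ? Outcome::HolderDies : Outcome::RequesterWaits;
}
```

In both schemes it is always the younger process that is sacrificed, which is what breaks the symmetry and rules out a cycle of mutually waiting processes.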

It is important to note that a system may be in an unsafe state without ending up in deadlock: the notion of safe/unsafe state refers only to whether the system can enter a deadlock state, not whether it will. For example, if a process requests A (which would result in an unsafe state) but releases B (which prevents a circular wait), then the state is unsafe but the system is not in deadlock.


Deadlocks can be prevented by ensuring that at least one of the following four conditions cannot occur:

  • Removing the mutual exclusion condition means that no process may have exclusive access to a resource. This proves impossible for resources that cannot be spooled, and even with spooled resources deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms.
  • The "hold and wait" conditions may be removed by requiring processes to request all the resources they will need before starting up (or before embarking upon a particular set of operations); this advance knowledge is frequently difficult to satisfy and, in any case, is an inefficient use of resources. Another way is to require processes to release all their resources before requesting all the resources they will need. This too is often impractical. (Such algorithms, such as serializing tokens, are known as the all-or-none algorithms.)
  • A "no preemption" (lockout) condition may also be difficult or impossible to avoid as a process has to be able to have a resource for a certain amount of time, or the processing outcome may be inconsistent or thrashing may occur. However, inability to enforce preemption may interfere with a priority algorithm. (Note: Preemption of a "locked out" resource generally implies a rollback, and is to be avoided, since it is very costly in overhead.) Algorithms that allow preemption include lock-free and wait-free algorithms and optimistic concurrency control.
  • The circular wait condition: algorithms that avoid circular waits include "disable interrupts during critical sections", "use a hierarchy to determine a partial ordering of resources" (where no obvious hierarchy exists, even the memory address of resources has been used to determine ordering), and Dijkstra's solution.
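For the hierarchy idea, modern C++ offers a ready-made tool: std::lock acquires several mutexes with a built-in deadlock-avoidance algorithm, so even call sites that name the locks in opposite orders cannot form a circular wait. A minimal sketch (the move_funds function and the balances are invented for the example):

```cpp
#include <mutex>
#include <thread>

std::mutex m1, m2;
int balance1 = 100, balance2 = 100;

// std::lock takes both mutexes atomically with respect to deadlock:
// it never holds one while blocking indefinitely on the other.
void move_funds(std::mutex& from_m, int& from, std::mutex& to_m, int& to, int amount)
{
    std::lock(from_m, to_m);  // all-or-nothing acquisition of both locks
    std::lock_guard<std::mutex> g1(from_m, std::adopt_lock);
    std::lock_guard<std::mutex> g2(to_m, std::adopt_lock);
    from -= amount;
    to += amount;
}
```

Two threads can call move_funds with the mutexes in opposite orders, which with naive lock-then-lock code is exactly the classic deadlock, and still make progress.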


Often neither deadlock avoidance nor deadlock prevention may be used. Instead, deadlock detection and process restart are used, employing an algorithm that tracks resource allocation and process states, and that rolls back and restarts one or more of the processes in order to remove the deadlock. Detecting a deadlock that has already occurred is straightforward, since the resources each process has locked and/or currently requested are known to the resource scheduler or OS.
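Detecting a deadlock that has occurred reduces to finding a cycle in the wait-for graph, where an edge a → b means "process a is waiting for a resource held by process b". A depth-first-search sketch in C++ (the graph representation is an assumption for the example):

```cpp
#include <map>
#include <set>
#include <vector>

using WaitForGraph = std::map<int, std::vector<int>>;

static bool dfs(int node, const WaitForGraph& g,
                std::set<int>& visiting, std::set<int>& done)
{
    if (done.count(node)) return false;              // already fully explored
    if (!visiting.insert(node).second) return true;  // back edge: a cycle
    auto it = g.find(node);
    if (it != g.end())
        for (int next : it->second)
            if (dfs(next, g, visiting, done)) return true;
    visiting.erase(node);
    done.insert(node);
    return false;
}

// True iff the wait-for graph contains a cycle, i.e. a deadlock.
bool has_deadlock(const WaitForGraph& g)
{
    std::set<int> visiting, done;
    for (const auto& entry : g)
        if (dfs(entry.first, g, visiting, done)) return true;
    return false;
}
```

A recovery scheme would then pick a victim process on the cycle, roll it back, and rerun the check.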

Detecting the possibility of a deadlock before it occurs is much more difficult and is, in fact, generally undecidable, because the halting problem can be rephrased as a deadlock scenario. However, in specific environments, using specific means of locking resources, deadlock detection may be decidable. In the general case, it is not possible to distinguish between algorithms that are merely waiting for a very unlikely set of circumstances to occur and algorithms that will never finish because of deadlock.

Distributed deadlock

Distributed deadlocks can occur in distributed systems when distributed transactions or concurrency control is being used. Distributed deadlocks can be detected either by constructing a global wait-for graph from local wait-for graphs at a deadlock detector or by a distributed algorithm like edge chasing.

Phantom deadlocks are deadlocks that are detected in a distributed system but do not actually exist at detection time: they have either already been resolved or have disappeared because the transactions involved aborted.


A livelock is similar to a deadlock, except that the states of the processes involved in the livelock constantly change with regard to one another, none progressing. Livelock is a special case of resource starvation; the general definition only states that a specific process is not progressing.

As a real-world example, livelock occurs when two people meet in a narrow corridor, and each tries to be polite by moving aside to let the other pass, but they end up swaying from side to side without making any progress because they always both move the same way at the same time.

Livelock is a risk with some algorithms that detect and recover from deadlock. If more than one process takes action, the deadlock detection algorithm can repeatedly trigger. This can be avoided by ensuring that only one process (chosen randomly or by priority) takes action.

Wednesday, June 20, 2007

How to put your Mac to sleep?

Two Seconds to Sleep (a bit of Mac magic)

The fastest way to put your Mac into a deep, hibernation-like sleep (no whirring fan, no dialogs, no sound, just fast, glorious sleep): press Command-Option and hold the Eject key for about two seconds, and the Mac goes straight to sleep.

It doesn’t get much faster than that.

Tuesday, June 19, 2007

What are dangling pointers in the world of C, C++ and VC++?

Dangling pointer

Dangling pointers and wild pointers in computer programming are pointers that do not point to a valid object of the appropriate type, or to a distinguished null pointer value in languages which support this. Dangling pointers arise when an object is deleted or de-allocated without modifying the value of the pointer, so that the pointer still points to the memory location of the de-allocated memory.

As the system may reallocate the previously freed memory, if the original program then dereferences the (now) dangling pointer, unpredictable behavior may result, because the memory may now contain completely different data. This is especially true if the program writes to memory through a dangling pointer: silent corruption of unrelated data may result, leading to subtle bugs that can be extremely difficult to find, or to general protection faults (on Windows). If the overwritten data is bookkeeping data used by the system's memory allocator, the corruption can cause system instabilities.

Wild pointers arise when a pointer is used prior to initialization to some known state, which is possible in some programming languages. They show the same erratic behavior as dangling pointers, though they are less likely to stay undetected.

Cause of dangling pointers

In many languages (particularly the C programming language, which assumes the programmer will take care of such issues and hence does not include many of the checks present in higher-level languages), deleting an object from memory explicitly, or destroying the stack frame on return, does not alter any associated pointers. The pointer still points to the location in memory where the object or data was, even though the object or data has since been deleted and the memory may now be used for other purposes, creating a dangling pointer.

A straightforward example is shown below:

           char *cp = NULL;
           /* ... */
           {
               char c;
               cp = &c;
           } /* The memory location that c was occupying is released here */
           /* cp is now a dangling pointer */

In the above, one solution to avoid the dangling pointer is to make cp a null pointer after the inner block is exited, or to otherwise guarantee that cp won't be used again without further initialization in the code which follows.

Another frequent source of creating dangling pointers is a jumbled combination of malloc () and free () library calls. In such a case, a pointer becomes dangling when the block of memory it points to is freed. As with the previous example, one way to avoid this is to make sure to set the pointer back to null after freeing the memory, as demonstrated below:

           char *cp = malloc ( A_CONST );
           /* ... */
           free ( cp );      /* cp now becomes a dangling pointer */
           cp = NULL;        /* cp is no longer dangling */
           /* ... */

Lastly, a common programming misstep that creates a dangling pointer is returning the address of a local variable. Since local variables are de-allocated when the function returns, any pointer to a local variable becomes a dangling pointer once the stack frame is de-allocated.

       char * func ( void )
       {
           char ca[] = "Pointers and Arrays - II";
           /* ... */
           return ca;   /* ca is de-allocated on return, so the caller
                           receives a dangling pointer */
       }
If it is required to return the address of ca, it should be declared with the static storage specifier.

Cause of wild pointers

Wild pointers are created by omitting the necessary initialization prior to first use. Thus, strictly speaking, every pointer in programming languages that do not enforce initialization begins as a wild pointer.

This most often occurs by jumping over the initialization rather than omitting it altogether. Most compilers are able to warn about this.

Security holes involving dangling pointers

Like buffer overflow bugs, dangling/wild pointer bugs are frequently security holes. For example, if the pointer is used to make a virtual function call, a different address (possibly pointing at exploit code) may be called due to the vtable pointer being overwritten. Alternatively, if the pointer is used for writing to memory, some other data structure may be corrupted. Even if the memory is only read once the pointer becomes dangling, it can lead to information leaks (if interesting data is put in the next structure allocated there) or privilege escalation (if the now-invalid memory is used in security checks).

Avoiding dangling pointer errors

A popular technique to avoid dangling pointers is to use smart pointers. A smart pointer typically uses reference counting to reclaim objects. Some other techniques include the tombstones method and the locks-and-keys method.
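In modern C++, the standard library's reference-counted std::shared_ptr, paired with std::weak_ptr, gives exactly this protection: a weak_ptr observes an object without keeping it alive and can tell when the object is gone, where a raw pointer would silently dangle. A small sketch (the names are invented for the example):

```cpp
#include <memory>

// A weak_ptr tracks an object owned by shared_ptrs without extending
// its lifetime; unlike a raw pointer, it can be asked safely whether
// the object still exists.
std::weak_ptr<int> observer;

bool value_still_alive()
{
    return !observer.expired();   // safe to call even after deallocation
}
```

When the last shared_ptr owner is destroyed, the object is reclaimed and every weak_ptr to it reports expired, instead of dangling.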

One alternative is to use the DieHard memory allocator[1], which virtually eliminates dangling pointer errors, as well as a variety of other memory errors (like invalid and double frees).

In languages like Java, dangling pointers cannot occur because there is no mechanism to explicitly de-allocate memory. Rather, the garbage collector may de-allocate memory, but only when the object is no longer reachable from any references.

Dangling pointer detection

To expose dangling pointer errors, one common programming technique is to set pointers to the null pointer, or to an invalid address, once the storage they point to has been released. When the null pointer is dereferenced, the program (in most languages) will immediately terminate, so there is no potential for data corruption or unpredictable behavior. This makes the underlying programming mistake easier to find and resolve. This technique does not help when there are multiple copies of the pointer.

Some debuggers and debug allocators will automatically overwrite data that has been freed, usually with a specific pattern, such as 0xDEADBEEF (Microsoft's Visual C/C++ debug heap, for example, uses 0xCC, 0xCD or 0xDD depending on what has been freed). This usually prevents the data from being reused by making it useless, and also very prominent: the pattern shows the programmer that the memory has already been freed.
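The same scribbling technique is easy to apply by hand (a minimal sketch; the scribble helper is invented for the example, and 0xDD merely echoes one of the Visual C++ fill patterns mentioned above):

```cpp
#include <cstddef>
#include <cstring>

// Overwrite a block with a conspicuous fill pattern before releasing it,
// so any later read through a dangling pointer yields obviously bogus
// values instead of plausible-looking stale data.
void scribble(void* block, std::size_t size)
{
    std::memset(block, 0xDD, size);
}
```

Calling scribble(p, n) immediately before free(p) makes an accidental read through a stale copy of p far more likely to surface as a recognizable 0xDD pattern (at least until the allocator reuses the block).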

How to resolve memory leakages in C++ language?

Solution is Smart Pointer

A Smart pointer is an abstract data type that simulates a pointer while providing additional features, such as automatic garbage collection or bounds checking. These additional features are intended to reduce bugs caused by the use of pointers while retaining efficiency. Smart pointers typically keep track of the objects they point to for the purpose of memory management.

The use of pointers is a major source of bugs: the constant allocation, de-allocation and referencing that must be performed by a program written using pointers makes it very likely that some memory leaks will occur. Smart pointers try to prevent memory leaks by making the resource de-allocation automatic: when the pointer to an object (or the last in a series of pointers) is destroyed, for example because it goes out of scope, the pointed object is destroyed too.

Several types of smart pointers exist. Some work with reference counting, others by assigning ownership of the object to a single pointer. If the language supports automatic garbage collection (for instance, Java), then this use of a smart pointer is unnecessary.

In C++ language, smart pointers may be implemented as a template class that mimics, by means of operator overloading, the behavior of traditional (raw) pointers, (e.g.: dereferencing, assignment) while providing additional memory management algorithms.
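A minimal sketch of such a template, assuming an intrusive counter on the pointee (the Counted base class is invented for this example; production libraries such as boost::shared_ptr keep the count in a separate control block instead):

```cpp
// Pointees derive from Counted so the pointer can find the reference count.
struct Counted {
    int counter = 0;
    virtual ~Counted() = default;
};

template <class X>
class SmartPointer {
    X* ptr = nullptr;
    void acquire(X* p) { ptr = p; if (ptr) ++ptr->counter; }
    void release()     { if (ptr && --ptr->counter == 0) delete ptr; ptr = nullptr; }
public:
    SmartPointer(X* p = nullptr)        { acquire(p); }
    SmartPointer(const SmartPointer& o) { acquire(o.ptr); }
    SmartPointer& operator=(const SmartPointer& o)
    {
        if (this != &o) { X* p = o.ptr; release(); acquire(p); }
        return *this;
    }
    ~SmartPointer() { release(); }
    X& operator*() const  { return *ptr; }  // mimic raw-pointer dereference
    X* operator->() const { return ptr; }
};
```

The converting constructor from a raw pointer is what makes assignments like SmartPointer p = obj_1; in the example below work; the last SmartPointer to release an object deletes it.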

Smart pointers can facilitate intentional programming by expressing the use of a pointer in the type itself. For example, if a C++ function returns a pointer, there is no way to know whether the caller should delete the memory pointed to when the caller is finished with the information.

some_type* ambiguous_function(); // What should be done with the result?

Traditionally, this has been solved with comments, but this can be error-prone. By returning a C++ auto_ptr,

auto_ptr<some_type> obvious_function1();

the function makes explicit that the caller will take ownership of the result, and furthermore, that if the caller does nothing, no memory will be leaked. Similarly, if the intention is to return a pointer to an object managed elsewhere, the function could return by reference:

some_type& obvious_function2();


Let SmartPointer be a template smart pointer for class X.

void test_smartpointers()
{
    // First, we create two objects and keep raw pointers to them.
    // Since these pointers are not smart, they do not affect the objects' lifecycle.
    Object* obj_1 = new Object();
    Object* obj_2 = new Object();
    // Then we declare two smart pointers and assign the objects to them;
    // both obj_1 and obj_2 now have counter == 1.
    SmartPointer p = obj_1;
    SmartPointer q = obj_2;
    // Now we assign p to q, yielding obj_1.counter == 2;
    // obj_2 is destroyed because its counter reaches 0.
    q = p;
    // We assign q with NULL; obj_1.counter drops back to 1.
    q = NULL;
    // Now we create a new object and assign its address to the smart pointer;
    // obj_1 is destroyed because its counter reaches 0, and the new object
    // will be destroyed automatically before p leaves the scope.
    p = new Object();
    // Finally, we create another object referenced only by a raw pointer;
    // obj_3 is lost and there will be a memory leak.
    Object* obj_3 = new Object();
}

Saturday, June 16, 2007

Mac OS 10.x Cocoa programming - Article 1

Memory Management with Cocoa objects

By: Elango C

This article is a primer on managing the allocation and de-allocation of objects (and therefore the memory they use) in the context of applications built using Apple's Foundation framework, and other frameworks that rely upon it, such as EOF & WebObjects. It describes how to use Foundation's memory management infrastructure, including the reference counting mechanism and auto release pools, syntactic notations, object ownership as it pertains to memory management, common pitfalls, and good programming practices.

This article is intended for programmers who are new or have some limited exposure to Apple's frameworks, but have some Object-Oriented programming experience and are familiar with Objective-C, which is used for all the examples. OO concepts and terminology are liberally used below.

Objects Seen As Memory

Objects, or instances of Classes, are unique by virtue of being distinct fragments of memory that contain the state for each instance. Therefore, the creation and deletion of an object is equivalent to the allocation and de-allocation of the memory it occupies. The Foundation framework, upon which all other frameworks are built, provides reference counting for objects, as well as a delayed object disposal mechanism, by means of a root class, NSObject, as well as an Objective-C protocol of the same name that other classes can adopt. Most classes in Apple's frameworks and in applications built on them are subclasses of NSObject or conform to the NSObject protocol, and can therefore avail themselves of this infrastructure.

Since Apple's frameworks expose their functionality in the form of classes (though there are some C functions and structs), memory management is cast in terms of object creation and disposal. Over the spectrum of memory management methods ranging from the malloc/free of the C world to automatic garbage collection in Smalltalk and Java, Foundation's reference counting and delayed disposal lie somewhere in the middle.

Object Ownership

Foundation and other frameworks suggest a policy for creating and disposing objects:

  • If you create an object, you are responsible for disposing it.
  • If you want an object you didn't create to stay around, you must "retain" it and then "release" it when you no longer need it.

The idea being that the creator of an object is its owner and only the owner of an object may destroy it. Consistently adopting this policy makes code simpler, more robust and avoids problems such as references to destroyed objects or leaks. Note, though, that there is some subtlety here. By using NSAutoreleasePool, the delayed disposal mechanism, the creator of an object is technically delegating responsibility for its destruction to the NSAutoreleasePool. The term "Object Ownership" is somewhat misleading in this regard.

Object Allocation & Initialization

SomeClass *anInstance = [[SomeClass alloc] init];

is the standard idiom for creating an object by first allocating memory for it and then initializing it. On Operating Systems that understand the notion of memory "zones" (such as Mach), the allocWithZone: method attempts to allocate memory from within the specified zone to improve locality of reference. Subclasses of NSObject with state must also typically implement extended initialization methods, e.g.:

@interface CartesianCoordinate : NSObject
{
        NSNumber *abscissa;
        NSNumber *ordinate;
}
- (CartesianCoordinate *) initWithAbscissa: (NSNumber *)anAbscissa
                                  ordinate: (NSNumber *)anOrdinate;
@end

NSObject also provides the copy, mutableCopy, copyWithZone: and mutableCopyWithZone: methods that make identical copies of an object by allocating memory and duplicating the object's state.

Object Disposal

You indicate that you are no longer interested in an object by sending it the release message. When nobody is interested in an object, i.e., when there are no external references to it, it is de-allocated by sending it the dealloc message. Classes with state are responsible for cleaning up by releasing any other objects they in turn may be retaining in their dealloc implementations:

@implementation CartesianCoordinate
- (void) dealloc
{
        [abscissa release];
        [ordinate release];
        [super dealloc];
}
@end

Object Reference Counting

Reference counting is extremely simple (modulo Distributed Objects, which can get a mite hairy). Each object has a "retain count" associated with it, that counts external references to it. When an object is initially created using init, initWith..., or one of the copy methods, it has an implicit retain count of 1. Other objects can "retain" it by sending the retain message, which increments the retain count. Each release message correspondingly decrements the retain count. When the count reaches 0, the object is de-allocated. You can examine the retain count of an object by sending it the retainCount message.

In the following example, an object (alertString) is created, used, and then disposed of:

- (void) notifyUserOfError: (NSString *)errorString
{
        NSMutableString *alertString = nil;
        alertString = [[NSMutableString alloc] initWithString:
                        @"The following error occurred: "];
        [alertString appendString: errorString];
        NSRunAlertPanel( alertString ...);
        [alertString release];
}

Temporary Objects

As you can see in the code fragment above, it is often necessary to create throw-away objects that are used once and then destroyed. This is simple when the scope is well-defined, as above. But what if the temporary object has to be returned to the caller? Common idioms for dealing with this in C are to use statically allocated buffers or to return dynamically allocated memory which the caller is then responsible for freeing. Foundation provides a somewhat more elegant solution by means of a delayed disposal mechanism that allows the creation of temporary objects which eventually go away auto-magically. Here's the same method rewritten:

- (void) notifyUserOfError: (NSString *)errorString
{
        NSMutableString *alertString = nil;
        alertString = [NSMutableString stringWithString:
                        @"The following error occurred: "];
        [alertString appendString: errorString];
        NSRunAlertPanel( alertString ...);
}

As you can see, the alertString is not sent a release message after it is used. Callers of this method need not worry about disposing alertString. Because of the way it was created, it is an "autoreleased" object and will go away eventually. An autoreleased object is simply one that will automatically receive a release message at some point in the future. Autoreleased objects hence have a finite lifetime and will be destroyed unless explicitly retained. You autorelease an object by sending it a (surprise) autorelease message. In the code fragment above, the line

alertString = [NSMutableString stringWithString:
                    @"The following error occurred: "];

is exactly the same as

alertString = [[[NSMutableString alloc] initWithString:
            @"The following error occurred: "] autorelease];

Per Foundation method naming conventions, creation conveniences such as stringWithString: always return autoreleased instances.

Gory Autorelease Details

Though autoreleasing an object is conceptually simple, it is useful to know more about how the mechanism works. Each application has a number of NSAutoreleasePool objects, which, as their name suggests, are collections of autoreleased objects. Sending autorelease to an object adds it to an NSAutoreleasePool. At some point in the future, typically at the end of the event loop in Foundation and AppKit applications, or at the end of the request-response loop in WebObjects applications, the NSAutoreleasePool sends release to all its objects (when it is itself released).

Notice that NSAutoreleasePool is mentioned in the plural. Why would there be more than one? Because being able to scope the lifetime of objects is sometimes very useful, autorelease pools are stackable, and multi-threaded applications can have a stack of pools per thread. If you are creating a large number of temporary objects that are only valid within a very tight context such as a loop, and don't want those objects to hog memory until much later on, you can create an autorelease pool that is local to that context:

- (id) findSomething
{
        id theObject = nil;   // Whatever we're looking for
        NSAutoreleasePool *localPool = [[NSAutoreleasePool alloc] init];
        // Autoreleased objects are now automatically placed in localPool.
        // Loop that creates many temporary objects:
        while ( theObject == nil ) {
            // ... create an autoreleased temporaryObject ...
            if ( [temporaryObject matchesSomeCondition] ) {
                theObject = [temporaryObject retain];   // We want this one
            }
        }
        // Get rid of all those temporary objects
        [localPool release];
        return [theObject autorelease];
}

Notice that by sending the temporaryObject we are interested in a retain message, we extend its life beyond that of localPool, and then again autorelease it before returning it, so that it is eventually disposed of.

Here is a more sophisticated example involving stacked pools:

- (NSArray *) findAListOfThings
{
        NSMutableArray *thingArray =
            [[NSMutableArray alloc] initWithCapacity: 25];
        // The list of 25 things we're looking for
        NSAutoreleasePool *outerPool = [[NSAutoreleasePool alloc] init];
        NSAutoreleasePool *innerPool = nil;
        NSArray *largeObjectArray = nil;
        id temporaryObject = nil;
        NSEnumerator *arrayEnumerator = nil;
        // Loops that create many temporary objects
        while ( [thingArray count] != 25 ) {
            largeObjectArray = [self fetchLotsOfObjects];
            // largeObjectArray is autoreleased and contained in the
            // outer autorelease pool
            arrayEnumerator = [largeObjectArray objectEnumerator];
            // Note that the enumerator itself is a temporary object!
            // It will be released by the outerPool.
            // Create the inner pool on each iteration. When a pool is
            // created, it automatically becomes the "top" pool on the
            // current thread's stack of pools.
            innerPool = [[NSAutoreleasePool alloc] init];
            // Autoreleased objects now go into innerPool.
            while ( temporaryObject = [arrayEnumerator nextObject] ) {
                if ( [temporaryObject matchesSomeCondition] ) {
                    [thingArray addObject: temporaryObject];
                    // Collections retain their members
                }
            }
            // Dispose of temporary objects created on this iteration.
            // Note that the objects added to thingArray during this
            // iteration are also in innerPool and thus sent a release
            // message, but are not destroyed because they have been
            // retained by thingArray and so have an additional reference
            // (their retainCount > 1).
            [innerPool release];
        }
        [outerPool release];
        return [thingArray autorelease];
}

Common Pitfalls

Here are some of the more straightforward mistakes made when using retain, release, and autorelease:

Releasing an object you didn't create:

@implementation Warden
- (void) chastizePrisonerNamed: (NSString *)aName
{
        Prisoner *thePrisoner = [Prisoner prisonerWithName: aName];
        // ...
        // Many a tense moment later
        [thePrisoner release];    // Ugh! thePrisoner isn't ours to release.
}

How do we know that thePrisoner is autoreleased? Remember: other than the alloc..., copy..., and mutableCopy... methods, class creation methods return autoreleased objects with a retain count of 1. Thus thePrisoner will automatically get a release message later on, taking its retain count to 0 and deallocating it. The extra release here means it will be released one time too many, typically crashing the program.

Not retaining autoreleased objects that you need beyond the present context:

@implementation Slacker
- (void) goofOff
{
        myRationale = [Rationale randomRationale];
        // myRationale is an instance variable
        sleep( rand() % 7200 );
        // Do more slacker stuff
}
// Later on
- (void) justifyTimeToPointyHairedBoss
{
        [self blurtOut: [myRationale description]];
        // Ugh! myRationale may no longer exist!
}

The more correct thing to do here is

   myRationale = [[Rationale randomRationale] retain];

or better yet,

   [self setMyRationale: [Rationale randomRationale]];

Returning temporary objects that you created without first autoreleasing them:

- (Emotion *) emotionForDate: (NSDate *)aDate
{
        Emotion *theEmotion = nil;
        // Compute an emotion
        theEmotion = [[Emotion alloc] initWithType:
                        rand() % [aDate hash]];
        return theEmotion;
        // Ugh! You are responsible for disposing your creations
}

Writing sloppy accessors:

- (void) setGame: (Game *)newGame
{
        [game release];
        game = [newGame retain];
        // Ugh! What if game == newGame?
}

Retain cycles, i.e., objectA retains objectB and objectB retains objectA. Avoiding retain cycles is a matter of good design and clear object ownership paradigms. In general, ownership should be unidirectional. For example, it makes sense for a collection to retain its members; it does not make sense for each member to also retain the collection.
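As a minimal sketch of unidirectional ownership, with hypothetical Playlist and Song classes invented for illustration, the member keeps a plain, non-retained back reference to its owner:

```objc
// Hypothetical example: a Playlist retains the Songs it contains
// (owner -> member), but a Song must NOT retain its Playlist
// (member -> owner).
@implementation Song
- (void) setPlaylist: (Playlist *)aPlaylist
{
        playlist = aPlaylist;   // Weak (non-retained) back reference
        // Writing [aPlaylist retain] here would create a retain
        // cycle: the playlist and the song would each hold a
        // reference to the other, so neither retain count could
        // ever reach 0 and neither object would be deallocated.
}
@end
```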

Useful Idioms

Always use accessor methods when referencing instance variables, even within your own class implementation! It is tempting to manipulate one's instance variables directly, but it is easy to forget to retain values, to forget to release previously referenced objects, and, in multi-threaded applications, to end up returning references to destroyed objects. The ubiquitous use of accessors also makes it easy to differentiate between instance, automatic, and global variables, and makes code easier to read.
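For example, with a hypothetical theory instance variable and its setTheory: accessor (randomTheory is an invented convenience constructor):

```objc
- (void) reconsider
{
        // Bad: direct assignment leaks the previously referenced
        // object and does not retain the new, autoreleased one.
        // theory = [Theory randomTheory];

        // Good: the accessor releases the old value and retains
        // the new one for us.
        [self setTheory: [Theory randomTheory]];
}
```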

Don't use autorelease in accessor implementations. It's tempting to write set methods like this:

- (void) setTheory: (Theory *)value
{
        [theory autorelease];
        theory = [value retain];
}

But autoreleasing an object is an expensive operation, and should only be used when there is uncertainty about an object's lifespan. When invoking a set method, you are no longer interested in the currently referenced object, so immediately releasing it is the correct thing to do. This approach has the added benefit of exposing extra-release problems that might otherwise not appear during testing because the old, autoreleased object is still around. Here is a prototype for an efficient, if somewhat verbose, set method:

- (void) setTheory: (Theory *)newTheory
{
        Theory *oldTheory = nil;
        if ( theory != newTheory ) {      // If they're the same, do nothing
            [self willChange];            // For Enterprise Objects only
            oldTheory = theory;           // Copy the reference
            theory = [newTheory retain];  // First retain the new object
            [oldTheory release];          // Then release the old object
        }
}

For classes with shared or singleton instances, always reference the instance via an accessor that will create it as necessary:

@implementation Earth
static Earth *_sharedEarth = nil;
+ (Earth *) sharedEarth
{
        if ( _sharedEarth == nil )
            _sharedEarth = [[Earth alloc] initWithTrees:  ... ];
        return _sharedEarth;
}

Friday, June 15, 2007

How to improve system performance?

How to improve system performance? By disabling the DOS 8.3 naming convention

I briefly mentioned MFT fragmentation in a previous article (refer to the "How to improve the disk performance (NTFS)?" article).

So what causes fragmentation? The most common cause is simply heavy use. Add/update/delete activity on a section of the disk will invariably cause it to fragment. There is no permanent solution, since we cannot avoid these operations; as such, it is a good idea to run the disk defragmenter regularly.

Contiguous data, which results from defragmenting the disk, improves system performance considerably. What I am suggesting here will prolong the intervals between defragmentations, leaving more time for your own productive work. Note that this tip is for folks who will never use a DOS-based program and do not care about connections from DOS-based operating systems (for example, old games and other pre-Windows 95 software).

In Windows XP, two file names are created for each file: the actual name, and an 8.3 version of that name for compatibility with DOS-based programs. This extra work in the name of compatibility consumes quite a lot of system resources, specifically CPU time and disk space. On top of that, it also increases MFT utilization and fragmentation. So the solution is to disable it. How?

Open the Registry using Regedit.exe and navigate to


In the right pane, look for the value named "NtfsDisable8dot3NameCreation" and set its value to 1. That's it.
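If you prefer, the same change can be captured in a .reg file and imported with Regedit. The FileSystem key path below is the standard location Microsoft documents for this value; verify it matches your system before importing:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"NtfsDisable8dot3NameCreation"=dword:00000001
```

The change takes effect for files created after the setting is applied; existing 8.3 names are not removed.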

You should see an improvement in system performance.

IMPORTANT: This procedure contains information about modifying the registry. Before you modify the registry, make sure to back it up and make sure that you understand how to restore the registry if a problem occurs. For information about how to back up, restore, and edit the registry, click the following article numbers to view the Microsoft Knowledge Base articles:

256986 - Description of the Microsoft Windows Registry

322756 - HOW TO: Back Up, Edit, and Restore the Registry in Windows XP.

Wednesday, June 13, 2007

How to improve the disk performance (NTFS)?

MFT (Master File Table) manipulations to improve disk performance

MFT stands for Master File Table.

Typically in Windows XP, if you are using NTFS (and I would recommend it if you aren't), NTFS by default reserves 12.5% of your disk space for the MFT. MFT fragmentation can also cause a significant slowdown. Let me discuss size first. If you have installed tons of different programs on your hard disk (or intend to do so), MFT utilization is going to be high. In that situation, it may be beneficial to increase this percentage to, say, 25%. If you want to do this, here is the trick.

Open the Registry using REGEDIT.EXE (you should be an administrator to perform read and write operations in the registry) and navigate to


In the right pane, add a new value named "NtfsMftZoneReservation" with a REG_DWORD value of 2. A DWORD value of 1 is interpreted as 12.5%, 2 as 25%, and so on.
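As with the 8.3-name tweak, this can be expressed as a .reg file. The FileSystem key path below is the location Microsoft documents for this value; confirm it against your system before importing:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"NtfsMftZoneReservation"=dword:00000002
```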

You should now feel the difference in file/folder access (reads and writes) on the NTFS partition(s).

IMPORTANT: This procedure contains information about modifying the registry. Before you modify the registry, make sure to back it up and make sure that you understand how to restore the registry if a problem occurs. For information about how to back up, restore, and edit the registry, click the following article numbers to view the Microsoft Knowledge Base articles:

256986 - Description of the Microsoft Windows Registry

322756 - HOW TO: Back Up, Edit, and Restore the Registry in Windows XP.

Monday, June 11, 2007

How to speed up the WinXP Boot-Up time

Having problems with a slow boot-up, or is your system taking a long time to resume?

There are a variety of reasons why your Windows XP system might boot slowly. Most of the time this has to do with the startup applications.

If you would like to speed up the boot-up sequence, consider removing some of the startup applications that you do not need. The easiest way to remove startup apps is through the System Configuration Utility, MSConfig.exe (launch the application via START -> RUN and enter "msconfig").

In the MSConfig.exe application, choose the STARTUP tab and deselect the application(s) that you do not want to start at boot time. If this works, great! If not, you can also look into the SERVICES tab, possibly deselect the WORKSTATION option, and see whether it helps booting performance.

To learn more about boot time and what Microsoft is doing about it, visit the Microsoft web site on fast boot / fast resume. In its download section you will find a tool called Bootvis.exe (Microsoft's boot performance trace visualization tool, which includes an option to optimize your boot sequence).

How to remove recycle bin from your desktop?

How to remove recycle bin from your desktop?

To do this you need to edit the Windows Registry as follows:

Start the Regedit.exe application (click Start -> Run, type the name of the application you want to open, in our case Regedit, and hit Enter).

Then navigate to the following entry in the registry



00AA002F954E} and delete it.

This action should remove the Recycle Bin from your desktop.

Now feel the difference.

IMPORTANT: This procedure contains information about modifying the registry. Before you modify the registry, make sure to back it up and make sure that you understand how to restore the registry if a problem occurs. For information about how to back up, restore, and edit the registry, click the following article numbers to view the Microsoft Knowledge Base articles:

256986 - Description of the Microsoft Windows Registry

322756 - HOW TO: Back Up, Edit, and Restore the Registry in Windows XP.

How to win the FreeCell and Solitaire games instantly

How to win the FreeCell game effortlessly

Just hold down Ctrl + Shift + F10 during game play.

You will then be asked whether you want to Abort, Retry, or Ignore.

Choose Abort, then move any card, and a window will pop up saying you have won the game.

Enjoy!

How to win the Solitaire game effortlessly

Just hold down Alt + Shift + 2 during game play.

You will see the cards come cascading down from the cells. Yes, you have won the game. Try it out and enjoy!