Cube Game Formula

Rubik’s Cube Game Formula


For all those who wondered how to solve that cube, well, here you go..

Consider learning the movements of the sides first, before starting off with the formulas…


Second Line RH Side

T–, R +, T +, R –, C –, T +, F + T –

Second Line LH Side

T +, L –, T –, L +, C +, T –, F –, T +

(If the centre line is matched, do the final piece)

Second Line Final Piece RH Side

T –, R +, T +, R –, T +, M –, F +, M +, T –

Second Line Final Piece LH Side

T +, L –, T –, L +, T –, M –, F –, M +, T +

(Match one centre piece and keep it at the left side; the original diagram showing the shape is not reproduced here)

Third Row for Plus Sign

R –, T –, F –, T +, F +, R +,

Third Row for Plus Sign Matching

R2, U –, M +, U2, M +, U2, M2, U +, R2

Third Row for Corner Pieces (Maximum 2)

T +, R +, T –, L –, T +, R –, T –, L +

(Repeat the above line as many times as needed, till you get linear positions)

(If you get diagonal pieces instead — diagram not reproduced here — then do the final positioning)

Final Positioning of all 6 Colors

R +, B2, R –, T –, B2, T +, F +, T –, B2, T +, R +, B2, R –, F –

R + : RH clockwise
R – : RH anticlockwise
L + : LH clockwise
L – : LH anticlockwise
F + : Front clockwise
F – : Front anticlockwise
T + : Top clockwise
T – : Top anticlockwise
U + : Under (bottom) clockwise
U – : Under (bottom) anticlockwise
M + : Middle (vertical) towards operator
M – : Middle (vertical) away from operator
C + : Centre (horizontal) clockwise
C – : Centre (horizontal) anticlockwise
B + : Back clockwise
B – : Back anticlockwise

P.S. Don’t ask me to teach it. It takes time and practice to learn it..


Redhat Workshop

Last Sunday I conducted a workshop on the basics of Blender at Redhat, as part of the Chamba initiative.

I tried to cover the contents below:

  1. Introduction to Blender Environment
  2. Modelling
  3. Object and Operations
  4. Lighting
  5. Textures
  6. Animation Basics
  7. Game Engine Basics

For more details, head over to this wiki page.

Android Workshop

Completed a three-weekend workshop on Android today..
There were participants from my college (PICT), from VIT, some MCA and MCS students over from Karrox and DY Patil COE, and some of my acquaintances.

The workshop was spread over three Sundays, that is, three legs.
The first leg covered the basics of Android development, the environment behind it, and much more, plus a head start on the components in Android.

The second day concentrated on intents, multiple activities, cross-activity workings, and coding sessions.

The last day was spent on revision and app creation; some topics like “Touch”, “Networking”, JSON parsing, and a Twitter timeline app were also covered..

Altogether I got a good response from around 12-20 students.

The examples in this workshop can be found over here..

Synopsis -Blunt


Relevant mathematics associated with the Project:

1. Let S be a system defined for the overview of the git-torrent system

S = {GR, GR’, T, D, C, E, Files | Фs}


GR = Git Repository

GR’ = Changed final Git Repository

T = Torrent system

D = Diff architecture

C = Choice architecture

Files = a set of files in any directory structure

E = Exceptions

Фs = Rules

Is (initial state): GR’ = {ф}, δ(GR) = {changes over GR}, T ≠ {ф}, S ∩ T ≠ {ф}

Fs (final state): GR’ ⊂ GR, GR’ ⊂ δ(GR), p = s


δ= changes done

p= peer

s= seed

f(GR, D, T, C | G ε T) → f(GR, δ(GR)) → GR’

t(GR) ⇒ diff(GR) as lim GR → CO

dif(files) → GR w.r.t. CO


g Є GR, g' Є GR'; GR' ⇒ Files



Input = {GR,GR’,T,D,C}

Output = {GR’,T’}


2. Let GR be a System defined for Git Repository

GR = {CO,REV, REF,t,A,L,Files,Ws | t εTREE}


CO = Commit Object

REV = Revision

REF = Reference

t = tree

A = Authentication

L = Log

Ws = window size for break-up

m(t, CO, A) → t’

trans(t, REF, REV) → (REF’ → t’)

size(Files) > Ws

{REF} ⊂ CO

dif(files) as lim files → CO → Ws

Output: Ws·CO = files, dif(GR, GR') = CO'







3. Let T be a system over Torrent

T = {P, S, TRAC, OM, PWP, THP, CON, gi, d, t | d ε D, gi ε GI, t ε Tree}




P = Peers

S = Seeds

TRAC = Tracker

PWP = Peer Wire Protocol

THP = Tracker HTTP Protocol

CON = Connectivity

GI = Git Interface

S is a proper subset of P

|S| → |CO| in the limit

f(P, t) → S

f(Pc', t) → P'

Pc' = (P ∧ δ(P')) ∨ δ(P') ⇒ (GR')

f(Pc', GR') ⇒ GR

dif(con'), where con' is a subset of S

for lim S → P, con(S) = GR'

f(tree, P) → Pl' = S

4. Let D be the diff architecture,

D={CO, CO’,CM,LOG,gi | gi ε GI}


CO=Commit Object

CO’= Changes over CO

CM=Context Mapping


dif(CO) → CO’ for lim CO’ → 0

CO’ = {cod, 0 | cod is a proper subset of CO}


5. Let C define the choice architecture, used to determine which of the seeds to use in a swarm.

C = {Sc, GR’, d, Sc’, TRAC | d ε D}

Sc = set of chosen seeds




6. Let CO be a system for Commit Object Management

CO={Delta,State,HASH,Data,d,a,log | a ε A, log ε LOG, d ε D}




data is a subset of files

Is → state = committed, a ≠ {ф}

delta: d {onto, one-to-one, one-to-many relations}

dif(data) = delta for lim delta → 0

f(CO') → log (changed CO for delta)


7. Let REF be the reference model,

REF={sw,t,Tspec, Tsw, Tsw’ , Sw’,spec | t ε T, Tsw’ is subset of tree}


sw= Switch

Spec= Specification

f(sw, sw’, Tsw) → Tsw’

f(sw’, Tspec) → Tsw’(Tspec)

f(spec, sw) → sw’

Tsw= Current Tree

Tsw’= Switched Tree

8. Let REV be for Revisioning

REV={sn,t,n | t ε Tree, sn ε SN}


sn=Snapshot of project



Tsn = Snapshot Tree

Generation of RCs and releases: R(Sn) → n

Exception condition: for same-version snapshotting, ignore this and let n be as it is.


9. Let A be used for Authentication

A={SSH, gr, At, User | SSH(user), gr ε GR}

f(gr, SSH)->gr’


ND (not defined) for an invalid user of A


10. LOG={u,gr,co,l ,GR’ ,l’| u ε user, gr ε GR,co ε CO, l ε LOG}


l = ∑ l_i for i = 0, 1, 2, …

l' = f() ∪ l; f and l increase monotonically


10. Let P be a set for Peer Functionality






and S≠{ф}


Fs = L’ -> S

11. THP defines the working of the Tracker HTTP Protocol

THP={TRAC,m,pl,ml,u }




pl=Peer List

ml= Mask List

and pl≠{ф}






12. The Peer Wire Protocol is given by

PWP = {gr, pl, ll, sl, m | gr ε GR}


sl=Seed List

ll=Leech List



sl is proper subset of pl
PWP →∫ m = u

Problem statement feasibility assessment, using satisfiability analysis and classification as NP-hard, NP-complete, or P type, using modern algebra:

We will consider Section 6 to prove our problem to be NP-complete.
1. The problem is in NP.
Verification of the required data from Se' is possible in O(n^k) time {k = constant}.
Derivation of Se' cannot, as such, be done in O(n^k).
As such, the problem is in NP.

2. We know that the Vertex Cover (VC) problem is NP-hard.
Using the vertex-cover analogy here, we need to find Se' (the cover) from Se (the graph G) that gives reachability (GR').
Thus, VC is a submodule call for this function {∏}.

Thus we can say ∏ > VC, but VC is NP-hard.

Thus ∏, being harder than it, also falls into NP-hard. Se' is Turing complete and Turing computable; this can be verified in O(n).
Thus, as ∏ is in NP and NP-hard, it is an NP-complete problem.

Now, ∏ is a submodule call from the main system S, so S > ∏.

Thus our system can be stated to be NP-complete.

Verification of this is linear, but derivation is not.

DNA Coverage

This news item ran today [13-09-2011]:

Lighting the innovation lamp in enterprising young minds

Rajesh Rao

At the Uniken Innovation lecture series 2011, organised recently in the city, three best innovative ideas were selected out of 28 presentations given by students of city institutes and entrepreneurs. The day-long lecture series was organised by Uniken in association with Shantanurao Kirloskar Chair (University of Pune), College of Engineering Pune (COEP) and Centre for Innovation, Incubation and Entrepreneurship (CIIE) IIM-Ahmedabad, on September 8 at COEP campus in the city.

World-renowned American cryptographer and one of the pioneers of public-key cryptography, Whitfield Diffie, motivated students with his talks at an interactive session conducted at the end of the event. Chief innovation officer and CEO of Uniken, Sanjay Deshpande, was also present on the occasion. “Education in India is more about understanding the mechanics of examination, rather than real application of knowledge. For innovation to happen on campus, we have to bring the concept of freedom in academics,” said Deshpande.

The lecture series, based on the theme ‘students as innovators’, was organised to encourage students to become innovators and entrepreneurs. Students from city institutes and entrepreneurs were invited to present and showcase their innovative ideas, final year projects and ready-to-market products. Out of the 28 presentations that were showcased, three best innovative ideas were selected. Projects of teams of final year students, Abhijit Navale and Nikhil Bhagwat (JSPM’s Jayawantrao Sawant College of Engineering) and Puneet Sutar, Vipul Amler, Prajakta Tambe, Neeraj Thakur (Pune Institute of Computer Technology), along with city-based entrepreneur Koustubh Yadre, were selected as the top three winners.

A technology innovation, product design and engineering firm, Uniken is headquartered in Florida (US), with its innovation centre in Pune. Uniken has partnered with CIIE (IIM-Ahmedabad) to set up a national innovation network across the campuses in India. Internet security expert Whitfield Diffie is on the scientific advisory board of Uniken.

The three best innovative ideas:

Abhijit Navale and Nikhil Bhagwat (final year students of electronic and telecommunication): Their project helps paralysed people to communicate with others using infra-red (IR) sensors installed on a goggle to detect facial muscle movements of the patient. The IR sensors send signals to a microcontroller, which uses the basic concept of interrupt generation to make words. A cost-effective and easy-to-operate mode for paralysed people to communicate, it does not require any touch interface.

Puneet Sutar, Vipul Amler, Prajakta Tambe, Neeraj Thakur (final year students of computer science): They have created the Git-Torrent distributed version control system, which provides re-visioning that keeps track of the history of all changes done to a file. Targeted towards large data producers such as the animation industry, the system uses Git-Torrent to help them work easily in remote locations. The innovative idea envisages providing a solution for re-visioning large data.

Koustubh Yadre (entrepreneur): As an alternative to disposing household wet garbage, he has designed a portable compost tumbler. Adding daily waste in the tumbler along with a waste binder turns the waste into compost within 15 days, which can be used for gardening. All types of bio-degradable material like food and vegetable remains, fruit peels, flowers and garden waste can be used. The specially designed tumbler is a UV-resistant and thermo-controlled drum. The compost cultures used are beneficial fungi and bacteria. The tumbler also helps in keeping the waste free from rodents, bad odour and flies.

One more news item appeared a day before, on page 4 of DNA Pune: “American Expert…”

External Links:


Page 5.


Page 4.



We’ve seen many problems that can be solved quickly (in close to linear time, or at least in a time that is some small polynomial function of the input size, O(n^k)).

NP-completeness states something different. It gives evidence that many important problems can’t be solved quickly.

Why should we care?

These NP-complete problems really do come up all the time. Knowing they’re hard lets you opt for alternatives instead of searching for an overly optimized solution:

1. Use a heuristic {solving fractions of the problem in good O(n)}.

2. Solve the problem approximately instead of exactly.

3. Use an exponential-time solution anyway.

4. Choose a better abstraction: add important details.

Classification of problems

The subject of computational complexity theory is dedicated to classifying problems by how hard they are. There are many different classifications; some of the most common and useful are the following.

1. P. Problems that can be solved in polynomial time. (“P” stands for polynomial.)

2. NP. This stands for “nondeterministic polynomial time”, where nondeterministic just means guessing a solution. A problem is in NP if you can quickly (in polynomial time) test whether a solution is correct (without worrying about how hard it might be to find the solution). Problems in NP are still relatively easy: if only we could guess the right solution, we could then quickly test it.

NP does not stand for “non-polynomial”. There are many complexity classes that are much harder than NP.

3. PSPACE. Problems that can be solved using a reasonable amount of memory (again defined formally as a polynomial in the input size) without regard to how much time the solution takes.

4. EXPTIME. Problems that can be solved in exponential time. This class contains most problems you are likely to run into, including everything in the previous three classes.

5. Undecidable. For some problems, we can prove that there is no algorithm that always solves them, no matter how much time or space is allowed. One very uninformative proof of this is based on the fact that there are as many problems as there are real numbers, and only as many programs as there are integers, so there are not enough programs to solve all the problems. But we can also define explicit and useful problems which can’t be solved.
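As a toy illustration of the guess-and-check definition of NP above, here is a verifier for the classic subset-sum problem (a sketch of mine, not from the notes): checking a proposed certificate against the target takes time linear in the input, even though finding such a subset may require exponential search.

```c
#include <assert.h>
#include <stddef.h>

/* Verify a subset-sum certificate: `chosen` lists k indices into `set`;
 * accept iff the chosen numbers add up to `target`. This check is the
 * polynomial-time "test" part of the NP definition. */
int verify_subset_sum(const int *set, size_t n,
                      const size_t *chosen, size_t k, int target) {
    int sum = 0;
    for (size_t i = 0; i < k; i++) {
        if (chosen[i] >= n)        /* certificate must index into the set */
            return 0;
        sum += set[chosen[i]];
    }
    return sum == target;
}
```

A full verifier would also reject repeated indices; this sketch keeps only the essential idea that checking is cheap.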

NP-completeness theory is concerned with the distinction between the first two classes, P and NP.

Examples of problems in different classes

Example 1: Long simple paths.

A simple path in a graph is just one without any repeated edges or vertices. To describe the problem of finding long paths in terms of complexity theory, we need to formalize it as a yes-or-no question: given a graph G, vertices s and t, and a number k, does there exist a simple path from s to t with at least k edges? A solution to this problem would then consist of such a path.

Why is this in NP? If you’re given a path, you can quickly look at it and add up the length, double-checking that it really is a path with length at least k. This can all be done in linear time, so certainly it can be done in polynomial time.
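The linear-time check described above can be sketched as follows; the function name and the adjacency-matrix representation are my own choices for illustration.

```c
#include <assert.h>
#include <string.h>

#define MAXV 16

/* Check that `path` (plen vertices) is a simple s-t path in the graph
 * `adj` with at least k edges: correct endpoints, no repeated vertices,
 * and every consecutive pair joined by an edge. Runs in O(plen). */
int verify_long_path(int adj[MAXV][MAXV], int nverts,
                     const int *path, int plen, int s, int t, int k) {
    int seen[MAXV];
    memset(seen, 0, sizeof seen);

    if (plen < 2 || path[0] != s || path[plen - 1] != t)
        return 0;
    if (plen - 1 < k)                        /* fewer than k edges */
        return 0;
    for (int i = 0; i < plen; i++) {
        int v = path[i];
        if (v < 0 || v >= nverts || seen[v]) /* repeated vertex: not simple */
            return 0;
        seen[v] = 1;
        if (i > 0 && !adj[path[i - 1]][v])   /* consecutive pair must be an edge */
            return 0;
    }
    return 1;
}
```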

However, we don’t know whether this problem is in P; I haven’t told you a good way of finding such a path (with time polynomial in m, n, and k).

There are algorithms that solve the problem; for instance, list all 2^m subsets of edges and check whether any of them solves the problem. But none of the known algorithms runs in polynomial time.

Example 2: Cryptography.

Suppose we have an encryption function, e.g.

code = RSA(key, text)

The “RSA” encryption works by performing some simple integer arithmetic on the code and the key, which consists of a pair (p,q) of large prime numbers. One can perform the encryption knowing only the product pq; but to decrypt the code you instead need to know a different product, (p-1)(q-1).

A standard assumption in cryptography is the “known plaintext attack”: we have the code for some message, and we know (or can guess) the text of that message. We want to use that information to discover the key, so we can decrypt other messages sent using the same key.

Formalized as an NP problem, we simply want to find a key for which code=RSA(key,text). If you’re given a key, you can test it by doing the encryption yourself, so this is in NP.
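A minimal sketch of that test, using square-and-multiply modular exponentiation and the standard tiny textbook key pair (p = 61, q = 53, n = 3233, e = 17, d = 2753); real RSA keys are of course vastly larger, and real implementations do far more than this.

```c
#include <assert.h>
#include <stdint.h>

/* Compute (base ^ exp) mod `mod` by repeated squaring. Safe for the tiny
 * moduli used here; real RSA needs big-integer arithmetic. */
uint64_t pow_mod(uint64_t base, uint64_t exp, uint64_t mod) {
    uint64_t result = 1;
    base %= mod;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}

/* The NP-style check: a guessed public key (e, n) matches iff encrypting
 * the known plaintext reproduces the observed code. */
int key_matches(uint64_t text, uint64_t code, uint64_t e, uint64_t n) {
    return pow_mod(text, e, n) == code;
}
```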

The hard question is, how do you find the key? For the code to be strong we hope it isn’t possible to do much better than a brute force search.

Example 3: Chess

Example 4: Halting problem.

Suppose you  have written your program, and start to run it. After five minutes, it is still going. Does this mean it’s in an infinite loop, or is it just slow?

It would be convenient if your compiler could tell you that your program has an infinite loop. However this is an undecidable problem: there is no program that will always correctly detect infinite loops.

Some people have used this idea as evidence that people are inherently smarter than computers, since it shows that there are problems computers can’t solve. Here’s an example:



#include <math.h>
#include <stdlib.h>

int main(void) {
    for (int x = 3; ; x++)               /* widen the search bound forever */
        for (int a = 1; a <= x; a++)
            for (int b = 1; b <= x; b++)
                for (int c = 1; c <= x; c++)
                    for (int i = 3; i <= x; i++)
                        if (pow(a,i) + pow(b,i) == pow(c,i))
                            return 0;    /* counterexample found: halt */
}
This program searches for solutions (counterexamples) to Fermat’s last theorem; it halts if and only if one exists.

Problems of complexity theory

The most famous open problem in theoretical computer science is whether P = NP.

“If it’s always easy to check a solution, should it also be easy to find the solution?”

So how does this theory tell us how hard any particular problem is to solve?


The theory of NP-completeness is a solution to the practical problem of applying complexity theory to individual problems. NP-complete problems are defined in a precise sense as the hardest problems in NP. Even though we don’t know whether there is any problem in NP that is not in P, we can point to an NP-complete problem and say that if there are any hard problems in NP, that problem is one of the hard ones.

So if we believe that P and NP are unequal, and we prove that some problem is NP-complete, we should believe that it doesn’t have a fast algorithm.

Most problems we’ve looked at in NP turn out either to be in P or NP-complete. So the theory of NP-completeness turns out to be a good way of showing that a problem is likely to be hard, because it applies to a lot of problems. But there are problems that are in NP, not known to be in P, and not likely to be NP-complete; for instance the code-breaking example.


Formally, NP-completeness is defined in terms of “reduction” which is just a complicated way of saying one problem is easier than another.

We say that A is easier than B, and write A < B, if we can write down an algorithm for solving A that uses a small number of calls to a subroutine for B.

Then if A < B, and B is in P, so is A: we can write down a polynomial algorithm for A by expanding the subroutine calls to use the fast algorithm for B.

So “easier” in this context means that if one problem can be solved in polynomial time, so can the other. It is possible for the algorithms for A to be slower than those for B, even though A < B.

As an example, consider the Hamiltonian cycle problem. Does a given graph have a cycle visiting each vertex exactly once? Here’s a solution, using longest path as a subroutine:

for each edge (u,v) of G

if there is a simple path of length n-1 from u to v

return yes      // path + edge form a cycle

return no

This algorithm makes m calls to a longest path subroutine, and does O(m) work outside those subroutine calls, so it shows that Hamiltonian cycle < longest path.
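The reduction above can be made concrete as follows. Since no polynomial longest-path algorithm is known, the sketch deliberately uses a brute-force stand-in for the subroutine; the reduction only cares about its yes/no answer, not its speed.

```c
#include <assert.h>
#include <string.h>

#define MAXV 8

/* Brute-force stand-in for the longest-path subroutine: does a simple
 * path with exactly `len` edges exist from u to v? (A simple path on n
 * vertices has at most n-1 edges, so "exactly n-1" suffices here.) */
static int path_exists(int adj[MAXV][MAXV], int n, int u, int v,
                       int len, int visited[MAXV]) {
    if (len == 0)
        return u == v;
    visited[u] = 1;
    for (int w = 0; w < n; w++)
        if (adj[u][w] && !visited[w] &&
            path_exists(adj, n, w, v, len - 1, visited)) {
            visited[u] = 0;
            return 1;
        }
    visited[u] = 0;
    return 0;
}

/* The reduction itself: G has a Hamiltonian cycle iff some edge (u,v)
 * closes a simple path of n-1 edges between its endpoints. */
int has_hamiltonian_cycle(int adj[MAXV][MAXV], int n) {
    for (int u = 0; u < n; u++)
        for (int v = 0; v < n; v++)
            if (adj[u][v]) {
                int visited[MAXV];
                memset(visited, 0, sizeof visited);
                if (path_exists(adj, n, u, v, n - 1, visited))
                    return 1;
            }
    return 0;
}
```

Note the shape of the argument: `has_hamiltonian_cycle` does only O(m) work of its own and delegates the hard part to the subroutine, which is exactly what Hamiltonian cycle < longest path means.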

Cook’s Theorem

We are now ready to formally define NP-completeness. We say that a problem A in NP is NP-complete when, for every other problem B in NP, B < A.

This seems like a very strong definition. After all, the notion of reduction we’ve defined above seems to imply that if B < A, then the two problems are very closely related; for instance Hamiltonian cycle and longest path are both about finding very similar structures in graphs. Why should there be a problem that closely related to all the different problems in NP?

Theorem: An NP-complete problem exists.

We prove this by example. One NP-complete problem can be found by modifying the halting problem .

Bounded halting. This problem takes as input a program X and a number K. The problem is to find data which, when given as input to X, causes it to stop in at most K steps.

To be precise, this needs some more careful definition: what language is X written in? What constitutes a single step? Also for technical reasons K should be specified in unary notation, so that the length of that part of the input is K itself rather than O(log K).

For reasonable ways of filling in the details, this is in NP: to test if data is a correct solution, just simulate the program for K steps. This takes time polynomial in K and in the length of program.
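As a toy illustration of step-bounded simulation, consider a hypothetical one-register program `while (x != 0) x--; halt;` (my own example, not from the notes) and a checker that runs it for at most k steps:

```c
#include <assert.h>

/* Simulate the toy program  while (x != 0) x--;  halt;
 * for at most k steps. On input x it halts after exactly x+1 steps,
 * so the checker answers yes iff x + 1 <= k. The point is only that
 * the check itself costs at most k simulated steps. */
int halts_within(long x, long k) {
    long steps = 0;
    while (steps < k) {
        if (x == 0)
            return 1;      /* reached the halt within the budget */
        x--;               /* one simulated step */
        steps++;
    }
    return 0;              /* still running after k steps */
}
```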

To finish the proof that this is NP-complete, we need to show that it’s harder than anything else in NP. Suppose we have a problem A in NP. This means that we can write a program PA that tests solutions to A, and halts within polynomial time p(n) with a yes or no answer depending on whether the given solution is really a solution to the given problem. We can then easily form a modified program PA’ to enter an infinite loop whenever it would halt with a no answer. If we could solve bounded halting, we could solve A by passing PA’ and p(n) as arguments to a subroutine for bounded halting. So A < bounded halting. But this argument works for every problem in NP, so bounded halting is NP-complete.

How to prove NP-completeness in practice

The proof above of NP-completeness for bounded halting is great for the theory of NP-completeness, but doesn’t help us understand other more abstract problems such as the Hamiltonian cycle problem.

Most proofs of NP-completeness don’t look like the one above; it would be too difficult to prove anything else that way. Instead, they are based on the observation that if A < B and B < C, then A < C.

As a consequence of this observation, if A is NP-complete, B is in NP, and A < B, B is NP-complete. In practice that’s how we prove NP-completeness: We start with one specific problem that we prove NP-complete, and we then prove that it’s easier than lots of others which must therefore also be NP-complete.

So e.g. since Hamiltonian cycle is known to be NP-complete, and Hamiltonian cycle < longest path, we can deduce that longest path is also NP-complete.

What steps do we have to take to prove a problem P is NP-Complete?

1. Prove P ∈ NP

2. Pick a known NP-complete problem Q

3.1 Reduce Q to P

3.2 Describe a transformation that maps instances of Q to instances of P, such that “yes” for P ⇔ “yes” for Q

3.3 Prove the transformation works

4. Prove it runs in polynomial time

{This is a modified version of a lecture document, which was previously published here.}

Module Compilation

Following is another of my old dumps, showing the installation and use of three types of kernel modules: a simple one, one printing from /proc, and another from /dev.

You can find the module directories listed below in this compilation.
sward@sward:~$ cd /boot/
sward@sward:/boot$ sudo update-grub
[sudo] password for sward:
Generating grub.cfg …
Found background image: /usr/share/images/desktop-base/desktop-grub.png
Found linux image: /boot/vmlinuz-
Found initrd image: /boot/initrd.img-
Found linux image: /boot/vmlinuz-2.6.32-5-686
Found initrd image: /boot/initrd.img-2.6.32-5-686
Found Windows 7 (loader) on /dev/sda1

#After Compiling and rebooting
sward@sward:~$ uname -r

sward@sward:~/module/hello_printk$ make
make -C /lib/modules/ M=/home/sward/module/hello_printk modules
make[1]: Entering directory `/home/sward/kernel/linux-′
Building modules, stage 2.
MODPOST 1 modules
make[1]: Leaving directory `/home/sward/kernel/linux-′
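For reference, a module like the hello_printk one being built above would look roughly like this. This is a minimal reconstruction for illustration, since the dump doesn't include the original source; it matches the "Hello, world!"/"Goodbye, world!" lines seen in dmesg below.

```c
/* hello_printk.c — minimal sketch of a simple kernel module */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL");

static int __init hello_init(void)
{
    printk(KERN_INFO "Hello, world!\n");  /* logged on insmod */
    return 0;
}

static void __exit hello_exit(void)
{
    printk(KERN_INFO "Goodbye, world!\n"); /* logged on rmmod */
}

module_init(hello_init);
module_exit(hello_exit);
```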

sward@sward:~/module/hello_printk$ sudo insmod ./hello_printk.ko
sward@sward:~/module/hello_printk$ dmesg | tail
[ 36.299929] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0
[ 41.163597] lp: driver loaded but no devices found
[ 41.386104] ppdev: user-space parallel port driver
[ 62.467260] sshd (2235): /proc/2235/oom_adj is deprecated, please use /proc/2235/oom_score_adj instead.
[ 88.182166] FAT: utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[ 393.599872] Hello, world!
[ 438.178403] Goodbye, world!
[ 1590.507930] Hello, world!
[ 3249.532024] Goodbye, world!
[ 3253.303910] Hello, world!

sward@sward:~/module/hello_printk$ sudo rmmod hello_printk
sward@sward:~/module/hello_printk$ dmesg | tail
[ 41.163597] lp: driver loaded but no devices found
[ 41.386104] ppdev: user-space parallel port driver
[ 62.467260] sshd (2235): /proc/2235/oom_adj is deprecated, please use /proc/2235/oom_score_adj instead.
[ 88.182166] FAT: utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[ 393.599872] Hello, world!
[ 438.178403] Goodbye, world!
[ 1590.507930] Hello, world!
[ 3249.532024] Goodbye, world!
[ 3253.303910] Hello, world!
[ 3286.511989] Goodbye, world!

##Using /proc to print
sward@sward:~/module/hello_proc$ sudo insmod ./hello_proc.ko
sward@sward:~/module/hello_proc$ cat /proc/hello_world
Hello, world!

##Using /dev to print. Yeah, that's the device
sward@sward:~/module/hello_dev$ sudo insmod hello_dev.ko
sward@sward:~/module/hello_dev$ sudo cat /dev/hello
Hello, world!

##For module licences:

* The following license idents are currently accepted as indicating free
* software modules
* “GPL” [GNU Public License v2 or later]
* “GPL v2” [GNU Public License v2]
* “GPL and additional rights” [GNU Public License v2 rights and more]
* “Dual BSD/GPL” [GNU Public License v2
* or BSD license choice]
* “Dual MIT/GPL” [GNU Public License v2
* or MIT license choice]
* “Dual MPL/GPL” [GNU Public License v2
* or Mozilla license choice]
* The following other idents are available
* “Proprietary” [Non free products]
* There are dual licensed components, but when running with Linux it is the
* GPL that is relevant so this is a non issue. Similarly LGPL linked with GPL
* is a GPL combined work.
* This exists for several reasons
* 1. So modinfo can show license info for users wanting to vet their setup
* is free
* 2. So the community can ignore bug reports including proprietary modules
* 3. So vendors can do likewise based on their own policies