Hardware Polymorphism in the x86 Architecture

Polymorphism is defined as “having multiple forms”: “poly” comes from the Greek word for “many” and “morph” from the Greek word for “form”, so the term joins the two into the idea of many forms (Ravichandran, 2001).

Polymorphism is common in high-level programming languages and can be applied in different ways (The Beginners Book, 2013):

  • A variable may take different forms: for example, the variable ID may hold a String in one context and an Integer in another;

  • A function may also assume different forms. Called with an Integer parameter, it may look up a user by ID; called with a String parameter, it may look up the user by name. This is usually called method overloading, which allows multiple methods to share the same name as long as their argument lists differ (see the sketch after this list).
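
A minimal sketch of that second case in C++ (the findUser name and its behaviour are hypothetical, chosen only for illustration) could look like this:

    #include <iostream>
    #include <string>

    // Hypothetical user lookup: the same name, two different argument lists.
    std::string findUser(int id) {
        // Look the user up by numeric ID (stubbed out here).
        return "user with id " + std::to_string(id);
    }

    std::string findUser(const std::string& name) {
        // Look the user up by name (stubbed out here).
        return "user named " + name;
    }

    int main() {
        std::cout << findUser(42) << std::endl;                  // resolved to the int overload
        std::cout << findUser(std::string("alice")) << std::endl; // resolved to the string overload
        return 0;
    }

The compiler picks the right body purely from the argument type; the caller always writes the same name.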

My subject this week is how polymorphism could be implemented in hardware, more specifically in memory and its data cells.

Let's take a simple example of what can be done in the Ruby programming language:

“Before we get any further, we should make sure we understand the difference between numbers and digits. 12 is a number, but '12' is a string of two digits.

Let's play around with this for a while:

    puts 12 + 12
    puts '12' + '12'
    puts '12 + 12'

(results)

    24
    1212
    12 + 12

How about this:

    puts 2 * 5
    puts '2' * 5
    puts '2 * 5'

(results)

    10
    22222
    2 * 5”

(Chris Pine, 2006)

As shown above, the results depend on the language distinguishing the data by type: string values are concatenated, numeric values are calculated, and the mixed cases produce the language-specific behaviour defined for those types and operations. At the hardware level, however, data cannot be distinguished by its type: the processor simply fetches and executes whatever bits arrive on each cycle.

Since the instructions and data stored in RAM are not distinguished from one another, because the memory cells carry no metadata, how could we attach such information to the data and make it possible to tell what kind of value is being processed? A second question follows: would that be viable, both in processing cost and in the cost of implementing such a solution in hardware?

One imaginary solution to this problem is for the data to carry its own type with it. To represent a positive integer, for instance, one byte would be reserved to carry that type information. Something similar already happens with the sign: in mathematics we mark negative numbers with the minus symbol '-', but in computing the sign is represented by a single bit, usually the most significant one: 0 for a positive number (or positive zero) and 1 for a negative number (or negative zero) (Tanenbaum, 2005).
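
A rough sketch of that idea, assuming an entirely made-up encoding in which one tag byte travels next to a 32-bit payload, could look like this in C++ (the TaggedWord struct and its Tag values are illustrative, not any real memory format):

    #include <cstdint>
    #include <cstdio>

    // Hypothetical type tags carried next to every stored value.
    enum class Tag : uint8_t {
        UnsignedInt = 0,
        SignedInt   = 1,
        Float       = 2,
        Character   = 3
    };

    // An imaginary memory word: one tag byte plus 32 bits of payload.
    struct TaggedWord {
        Tag      tag;
        uint32_t bits;
    };

    int main() {
        // -5 stored as a signed integer: the most significant bit of the
        // stored pattern is 1, playing the role of the sign bit described above.
        TaggedWord w{Tag::SignedInt, static_cast<uint32_t>(-5)};
        std::printf("tag=%u payload=0x%08lX\n",
                    static_cast<unsigned>(w.tag),
                    static_cast<unsigned long>(w.bits));
        return 0;
    }

The cost is visible even in the sketch: every word of memory grows by the size of the tag, before any processing overhead is counted.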

The piece of information that identifies what a collection of bits represents is called metadata, which is simply defined as “data that describes other data”. It can be compared to the sign bit: just as the sign travels with the bit pattern, a small amount of metadata carried with the data would identify its type.

Two more things should be mentioned in this article: the processor would need to implement a type-checking mechanism, and it would need to support overloaded instructions.

What would be necessary if I wanted to call the ADD instruction passing an integer and a floating-point number as arguments? The answer is simple: overload the operation so that it accepts different parameter types (sketched below).
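
In software terms, a hedged sketch of such an overloaded ADD could look like the C++ below; the add functions are hypothetical stand-ins for what, in the imagined hardware, would be distinct behaviours selected by the operand types:

    #include <iostream>

    // Three "forms" of the same ADD, selected by the argument types.
    int    add(int a, int b)       { return a + b; }  // integer ADD
    double add(double a, double b) { return a + b; }  // floating-point ADD
    double add(int a, double b)    { return a + b; }  // mixed ADD: the int is converted first

    int main() {
        std::cout << add(2, 3)     << std::endl; // 5
        std::cout << add(2.5, 3.5) << std::endl; // 6
        std::cout << add(2, 3.5)   << std::endl; // 5.5
        return 0;
    }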


In C++, for instance, this can be achieved through early binding, which happens during compilation: the compiler decides which function to call based on the argument list. This is the default mechanism in C++. It can also be achieved through late binding, where the function is chosen at execution time, or through pure virtual functions, which provide only the function declaration and leave the implementation to the subclasses (Ravichandran, 2001).
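
The late-binding case can be sketched briefly as well; the Adder hierarchy below is invented purely for illustration, but the mechanism, a pure virtual function resolved at run time, is standard C++:

    #include <iostream>
    #include <memory>

    // Pure virtual function: only the declaration lives here,
    // subclasses must provide the implementation.
    struct Adder {
        virtual ~Adder() = default;
        virtual double add(double a, double b) const = 0;
    };

    struct IntegerAdder : Adder {
        double add(double a, double b) const override {
            // Truncate to integers before adding.
            return static_cast<long long>(a) + static_cast<long long>(b);
        }
    };

    struct FloatAdder : Adder {
        double add(double a, double b) const override { return a + b; }
    };

    int main() {
        // The concrete type is only known here, at execution time:
        std::unique_ptr<Adder> adder = std::make_unique<FloatAdder>();
        std::cout << adder->add(2.5, 0.25) << std::endl; // late-bound call -> 2.75
        return 0;
    }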

In the case of the processor, function overloading would have to happen at execution time, since the processor executes operations dynamically and does not work with code that was compiled against known types.
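
To make that concrete, here is a speculative sketch of such run-time selection, reusing the imaginary tag byte from the earlier sketch (again an invented encoding, not anything a real x86 processor implements):

    #include <cstdint>
    #include <cstring>
    #include <cstdio>

    // Imaginary tag byte stored alongside each operand, as in the earlier sketch.
    enum class Tag : uint8_t { SignedInt = 1, Float = 2 };

    struct TaggedWord {
        Tag      tag;
        uint32_t bits;
    };

    // Read the payload as a float, converting from integer if needed.
    static float asFloat(TaggedWord w) {
        if (w.tag == Tag::Float) {
            float f;
            std::memcpy(&f, &w.bits, sizeof f);  // reinterpret the payload as IEEE-754
            return f;
        }
        return static_cast<float>(static_cast<int32_t>(w.bits));
    }

    // The run-time "overloaded ADD": the operand tags, not the program text,
    // decide which behaviour is used.
    TaggedWord hardwareAdd(TaggedWord a, TaggedWord b) {
        if (a.tag == Tag::Float || b.tag == Tag::Float) {
            float r = asFloat(a) + asFloat(b);
            TaggedWord out{Tag::Float, 0};
            std::memcpy(&out.bits, &r, sizeof r);
            return out;
        }
        // Both operands are integers: plain integer addition.
        int32_t r = static_cast<int32_t>(a.bits) + static_cast<int32_t>(b.bits);
        return TaggedWord{Tag::SignedInt, static_cast<uint32_t>(r)};
    }

    int main() {
        TaggedWord i{Tag::SignedInt, 7};
        float half = 1.5f;
        TaggedWord f{Tag::Float, 0};
        std::memcpy(&f.bits, &half, sizeof half);

        TaggedWord sum = hardwareAdd(i, f);  // dispatch chosen at run time
        std::printf("result tag=%u value=%g\n",
                    static_cast<unsigned>(sum.tag), asFloat(sum));
        return 0;
    }

Every addition would pay for this tag inspection, which is exactly the processing cost questioned earlier.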

If we consider the data path of a typical von Neumann machine, we realize that the whole scenario imagined above would be utopian in the x86 computer architecture we have today. The figure below shows the data path for a basic addition operation (Tanenbaum, 2005).

Figure 1: Data path of a typical von Neumann machine

Ravichandran, D. R., 2001. Introduction to Computers and Communication. 1st ed. New Delhi: Tata McGraw-Hill Education.

The Beginners Book, 2013. Polymorphism in Java – Method Overloading and Overriding. [ONLINE] Available at: http://beginnersbook.com/2013/03/polymorphism-in-java/ [Accessed 26 December 2015].

Tanenbaum, A. S., 2005. Structured Computer Organization. 5th ed. Amsterdam, The Netherlands: Pearson Education.

Pine, C., 2006. Learn to Program. [ONLINE] Available at: https://pine.fm/LearnToProgram/chap_02.html [Accessed 26 December 2015].
