
C++ Interface

In this tutorial we will discuss interfaces in C++ and why they are beneficial.  First off, an interface serves as a means of providing a common way of using various objects.  What I mean by that is: say you have a class representing a basketball and a class representing a soccer ball.  It is understood that both are used for playing, but how each of them is played with is different.  That difference in how they are used for playing is where interfaces come in.

An interface provides a common way of using a class of type basketball and a class of type soccer ball: it lets you refer to each different object (basketball or soccer ball) through the same interface.

Okay, okay, this sounds confusing, so let us take a couple of examples.

Without an interface, to have a basketball object and a soccer ball object perform the same behavior, playing, you could do this:


#include <iostream>
using namespace std;

class BasketBall{
public:
     void play(){
          cout << "playing with basketball" << endl;
     }
};
class SoccerBall{
public:
     void play(){
          cout << "playing with soccer ball" << endl;
     }
};
int main(){
     BasketBall* bball = new BasketBall();
     bball->play();
     delete bball;

     SoccerBall* sball = new SoccerBall();
     sball->play();
     delete sball;
     return 0;
}

In the preceding code, we declared two classes and created two objects: bball and sball.  Take note that the two objects have a common usage.  That is all well and good, but the cost is that if you want a function that executes the play method of these two objects, then you have to declare two separate functions, one for each of the respective types.


#include <iostream>
using namespace std;

class BasketBall{
public:
     void play(){
          cout << "playing with basketball" << endl;
     }
};
class SoccerBall{
public:
     void play(){
          cout << "playing with soccer ball" << endl;
     }
};
void executeBasketBallPlay(BasketBall* ball){
     ball->play();
}
void executeSoccerPlay(SoccerBall* ball){
     ball->play();
}
int main(){
     BasketBall* bball = new BasketBall();
     executeBasketBallPlay(bball);
     delete bball;

     SoccerBall* sball = new SoccerBall();
     executeSoccerPlay(sball);
     delete sball;
     return 0;
}

Here we see that it is inefficient to have two separate functions that execute essentially the same behavior on different class types.  So, the prudent thing to do would be to design a more efficient way of interacting with our balls (hey, that is kind of funny).  By now we have noted that our two classes share a common interface: the play method.  So we need a way for other functions or classes in our code to respect this common interface/commonality between these two balls.

So here we go.

#include <iostream>
using namespace std;

//here we define an interface, it is nothing more than a class with only pure
//virtual functions, denoted by the virtual keyword and the "=0" suffix
class PlayInterface {
public:
     virtual ~PlayInterface() {}   //virtual destructor so deleting through a base pointer is safe

     virtual void play() = 0;
};


class BasketBall : public PlayInterface {
public:
     void play(){
          cout << "playing with basketball" << endl;
     }
};
class SoccerBall : public PlayInterface {
public:
     void play(){
          cout << "playing with soccer ball" << endl;
     }
};

void executePlay(PlayInterface* ball){
     ball->play();
}

int main(){
     BasketBall* bball = new BasketBall();
     executePlay(bball);
     delete bball;

     SoccerBall* sball = new SoccerBall();
     executePlay(sball);
     delete sball;
     return 0;
}

Now we have significantly increased the effectiveness of our code because we have only one function that executes a simple command: playing with balls.  What this means is that executePlay accepts a valid object of any type, so long as that type derives from (implements) PlayInterface, and then it executes that object's play method.
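To see that flexibility in action, here is a minimal sketch (the TennisBall class is a hypothetical addition, not part of the original example) showing that a brand new ball type works with executePlay without changing executePlay at all:

#include <iostream>
using namespace std;

class PlayInterface {
public:
     virtual ~PlayInterface() {}
     virtual void play() = 0;
};

//a hypothetical new ball type; executePlay needs no changes to support it
class TennisBall : public PlayInterface {
public:
     void play(){
          cout << "playing with tennis ball" << endl;
     }
};

void executePlay(PlayInterface* ball){
     ball->play();
}

int main(){
     TennisBall* tball = new TennisBall();
     executePlay(tball);     //works because TennisBall implements PlayInterface
     delete tball;
     return 0;
}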

This is beneficial in many ways, but mainly because it allows us to stop worrying about matching concrete types and lets us focus on the common behaviors of our objects.
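As a further illustration (this sketch goes a bit beyond the original example), the same interface lets us keep different kinds of balls in a single container and play with all of them in one loop, without ever asking which concrete type each one is:

#include <iostream>
#include <vector>
using namespace std;

class PlayInterface {
public:
     virtual ~PlayInterface() {}
     virtual void play() = 0;
};

class BasketBall : public PlayInterface {
public:
     void play(){ cout << "playing with basketball" << endl; }
};
class SoccerBall : public PlayInterface {
public:
     void play(){ cout << "playing with soccer ball" << endl; }
};

int main(){
     //both objects are stored through the common interface type
     vector<PlayInterface*> balls;
     balls.push_back(new BasketBall());
     balls.push_back(new SoccerBall());

     //one loop plays with every ball, regardless of its concrete type
     for (size_t i = 0; i < balls.size(); ++i){
          balls[i]->play();
          delete balls[i];
     }
     return 0;
}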


