terms

NFV and VNF

First, what is a network function? The term typically refers to some component of a network infrastructure that provides a well-defined functional behavior, such as intrusion detection, intrusion prevention or routing.

Historically, we have deployed such network functions as physical appliances, where software is tightly coupled with specific, proprietary hardware. These physical network functions need to be manually installed into the network, creating operational challenges and preventing rapid deployment of new network functions.

A VNF, on the other hand, refers to the implementation of a network function using software that is decoupled from the underlying hardware. This can lead to more agile networks, with significant Opex and Capex savings.

In contrast, NFV typically refers to the overarching principle or concept of running software-defined network functions, independent of any specific hardware platform, as well as to a formal network virtualization initiative led by some of the world’s biggest telecommunications network operators. In conjunction with ETSI, these companies aim to create and standardize an overarching, comprehensive NFV framework, a high-level illustration of which appears below. Notice the diagram highlights VNFs that are deployed on top of NFV infrastructure, which may span more than one physical location.

To summarize, NFV is an overarching concept, while a VNF is a building block within ETSI’s current NFV framework.

 

ADU

Application discovery and understanding (ADU) is the process of automatically analyzing artifacts of a software application and determining the metadata structures associated with the application in the form of lists of data elements and business rules. The relationships discovered between the application and a central metadata registry are then stored in the metadata registry itself.

P2P and PPP

Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the application. They are said to form a peer-to-peer network of nodes.

Point-to-Point Protocol (PPP) is a data link (layer 2) protocol used to establish a direct connection between two nodes. It connects two routers directly, without any host or other networking device in between. It can provide connection authentication, transmission encryption (using ECP, RFC 1968), and compression.

cryptographic hash function

A cryptographic hash function is a special class of hash function that has certain properties which make it suitable for use in cryptography. It is a mathematical algorithm that maps data of arbitrary size to a bit string of a fixed size (a hash function) which is designed to also be a one-way function, that is, a function which is infeasible to invert. The only way to recreate the input data from an ideal cryptographic hash function’s output is to attempt a brute-force search of possible inputs to see if they produce a match. Bruce Schneier has called one-way hash functions “the workhorses of modern cryptography”.[1] The input data is often called the message, and the output (the hash value or hash) is often called the message digest or simply the digest.
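
As a concrete illustration, here is a minimal sketch assuming OpenSSL’s libcrypto is installed (compile and link with -lcrypto): the legacy one-shot SHA256() call maps a message of any length to a fixed 32-byte digest.

// sha256_demo.cpp -- hash a message with SHA-256 (sketch, error handling omitted)
#include <openssl/sha.h>
#include <cstdio>
#include <cstring>

int main() {
  const char* message = "hello world";            // the "message"
  unsigned char digest[SHA256_DIGEST_LENGTH];     // fixed-size output: 32 bytes

  SHA256(reinterpret_cast<const unsigned char*>(message),
         std::strlen(message), digest);

  // Print the "message digest" as hex.
  for (unsigned char byte : digest)
    std::printf("%02x", byte);
  std::printf("\n");
  return 0;
}

Changing even one bit of the input produces a completely different digest, and there is no practical way to recover the message from the digest other than the brute-force search described above.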

digital signature

A digital signature is a mathematical scheme for demonstrating the authenticity of a digital message or document. A valid digital signature gives the recipient reason to believe that the message was created by a known sender (authentication), that the sender cannot deny having sent the message (non-repudiation), and that the message was not altered in transit (integrity).

Digital signatures employ asymmetric cryptography. In many instances they provide a layer of validation and security to messages sent through a nonsecure channel: properly implemented, a digital signature gives the receiver reason to believe the message was sent by the claimed sender. In this sense, digital signatures are the electronic equivalent of handwritten signatures and stamped seals. Digitally signed messages may be anything representable as a bitstring: examples include electronic mail, contracts, or a message sent via some other cryptographic protocol. A digital signature scheme typically consists of three algorithms:

  • A key generation algorithm that selects a private key uniformly at random from a set of possible private keys and outputs it along with a corresponding public key.
  • A signing algorithm that, given a message and a private key, produces a signature.
  • A signature verifying algorithm that, given the message, public key and signature, either accepts or rejects the message’s claim to authenticity.

 

When encrypting, you use the recipient’s public key to write the message, and they use their private key to read it: confidentiality.

When signing, you use your private key to write the message’s signature, and they use your public key to check that it is really yours: data integrity, message authentication, and non-repudiation.
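
A minimal sketch of the three algorithms in code, assuming OpenSSL 1.1.1 or newer and its EVP interface (error handling omitted; compile and link with -lcrypto):

// sign_demo.cpp -- key generation, signing, verification (sketch)
#include <openssl/evp.h>
#include <openssl/rsa.h>
#include <cstdio>
#include <vector>

int main() {
  // 1. Key generation: a private key and the corresponding public key.
  EVP_PKEY* pkey = nullptr;
  EVP_PKEY_CTX* kctx = EVP_PKEY_CTX_new_id(EVP_PKEY_RSA, nullptr);
  EVP_PKEY_keygen_init(kctx);
  EVP_PKEY_CTX_set_rsa_keygen_bits(kctx, 2048);
  EVP_PKEY_keygen(kctx, &pkey);
  EVP_PKEY_CTX_free(kctx);

  const unsigned char msg[] = "an important message";

  // 2. Signing: message + private key -> signature.
  EVP_MD_CTX* sctx = EVP_MD_CTX_new();
  EVP_DigestSignInit(sctx, nullptr, EVP_sha256(), nullptr, pkey);
  std::vector<unsigned char> sig(EVP_PKEY_size(pkey));
  size_t siglen = sig.size();
  EVP_DigestSign(sctx, sig.data(), &siglen, msg, sizeof(msg));
  EVP_MD_CTX_free(sctx);

  // 3. Verification: message + public key + signature -> accept or reject.
  EVP_MD_CTX* vctx = EVP_MD_CTX_new();
  EVP_DigestVerifyInit(vctx, nullptr, EVP_sha256(), nullptr, pkey);
  int ok = EVP_DigestVerify(vctx, sig.data(), siglen, msg, sizeof(msg));
  EVP_MD_CTX_free(vctx);

  std::printf("signature %s\n", ok == 1 ? "accepted" : "rejected");
  EVP_PKEY_free(pkey);
  return 0;
}

In this sketch a single EVP_PKEY handle carries both halves of the key pair; in practice the signer keeps the private half and distributes only the public half to verifiers.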

Symmetric-key algorithms[1] are algorithms for cryptography that use the same cryptographic keys for both encryption of plaintext and decryption of ciphertext. The keys may be identical or there may be a simple transformation to go between the two keys. The keys, in practice, represent a shared secret between two or more parties that can be used to maintain a private information link.[2] This requirement that both parties have access to the secret key is one of the main drawbacks of symmetric key encryption, in comparison to public-key encryption (also known as asymmetric key encryption).[3]
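
A minimal sketch of the shared-key idea, assuming OpenSSL’s EVP interface with AES-256-CBC (compile with -lcrypto). The hard-coded key and IV are placeholders; real code must generate them randomly and keep the key secret:

// aes_demo.cpp -- the same key encrypts and decrypts (sketch, error handling omitted)
#include <openssl/evp.h>
#include <cstdio>

int main() {
  unsigned char key[32] = {0};               // 256-bit shared secret (placeholder)
  unsigned char iv[16]  = {0};               // initialization vector (placeholder)
  const unsigned char plaintext[] = "the same key encrypts and decrypts";

  unsigned char ciphertext[128];
  int clen = 0, tmp = 0;

  // Encrypt with the shared key...
  EVP_CIPHER_CTX* ctx = EVP_CIPHER_CTX_new();
  EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), nullptr, key, iv);
  EVP_EncryptUpdate(ctx, ciphertext, &clen, plaintext, sizeof(plaintext));
  EVP_EncryptFinal_ex(ctx, ciphertext + clen, &tmp);
  clen += tmp;
  EVP_CIPHER_CTX_free(ctx);

  // ...and decrypt with the very same key.
  unsigned char recovered[128];
  int rlen = 0;
  ctx = EVP_CIPHER_CTX_new();
  EVP_DecryptInit_ex(ctx, EVP_aes_256_cbc(), nullptr, key, iv);
  EVP_DecryptUpdate(ctx, recovered, &rlen, ciphertext, clen);
  EVP_DecryptFinal_ex(ctx, recovered + rlen, &tmp);
  EVP_CIPHER_CTX_free(ctx);

  std::printf("%s\n", recovered);   // prints the original plaintext
  return 0;
}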

https://en.wikipedia.org/wiki/Transport_Layer_Security

 

 

Scope Resolution Operator

In computer programming, the scope of a name binding – an association of a name to an entity, such as a variable – is the part of a computer program where the binding is valid: where the name can be used to refer to the entity.

In computer programming, scope is an enclosing context in which values and expressions are associated. The scope resolution operator helps to identify and specify the context to which an identifier refers, particularly by specifying a namespace. The specific uses vary across programming languages and their notions of scoping. In many languages the scope resolution operator is written “::”.
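
A small C++ example of :: disambiguating a local variable, a class static member, a namespace member, and a global that all share the same name:

// scope_demo.cpp
#include <iostream>

namespace audio {
  int volume = 7;            // audio::volume
}

int volume = 3;              // ::volume (global scope)

struct Mixer {
  static int volume;         // Mixer::volume
  void print() const;
};

int Mixer::volume = 11;      // define the static member using ::

void Mixer::print() const {  // define the member function using ::
  int volume = 42;           // local name shadows all the others
  std::cout << volume << " " << Mixer::volume << " "
            << audio::volume << " " << ::volume << "\n";  // prints: 42 11 7 3
}

int main() {
  Mixer{}.print();
  return 0;
}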

Wireless

dB and dBm

dBm (sometimes dBmW or decibel-milliwatts) is an abbreviation for the power ratio in decibels (dB) of the measured power referenced to one milliwatt (mW).

The decibel (dB) is a logarithmic unit used to express the ratio of two values of a physical quantity. One of these values is often a standard reference value, in which case the decibel is used to express the level[a] of the other value relative to this reference.
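
Since dBm is just 10 * log10 of the power expressed in milliwatts, the conversion is a one-liner in both directions; a small C++ sketch:

// dbm_demo.cpp -- convert between milliwatts and dBm
#include <cmath>
#include <cstdio>

double milliwatts_to_dbm(double p_mw) { return 10.0 * std::log10(p_mw); }     // dBm = 10*log10(P / 1 mW)
double dbm_to_milliwatts(double dbm)  { return std::pow(10.0, dbm / 10.0); }

int main() {
  std::printf("1 mW    = %.1f dBm\n", milliwatts_to_dbm(1.0));     // 0.0 dBm
  std::printf("100 mW  = %.1f dBm\n", milliwatts_to_dbm(100.0));   // 20.0 dBm
  std::printf("-30 dBm = %.3f mW\n", dbm_to_milliwatts(-30.0));    // 0.001 mW
  return 0;
}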

A wireless access point (WAP) is a networking hardware device that allows a Wi-Fi compliant device to connect to a wired network. The WAP usually connects to a router (via the wired network) as a standalone device, but it can also be an integral component of the router itself. A WAP is differentiated from a hotspot, which is the physical location where Wi-Fi access to a WLAN is available.

Video Outlets

https://www.distribber.com/faqs

Video producers deliver video to platforms such as iTunes, Google Play, and Hulu, pay a fee, and then split the revenue with the platforms.

 

SSL

handshake:

  1. The server sends its public key (in its certificate) and other parameters to the client.
  2. The client sends a pre-master secret encrypted with the server’s public key; the server decrypts it with its private key.
  3. Both sides generate the master secret and the session keys from the pre-master secret.
  4. All following data exchange is encrypted with the session keys.
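
A minimal TLS client sketch, assuming OpenSSL 1.1.0 or newer (link with -lssl -lcrypto). example.com is a placeholder host, and certificate verification is omitted here for brevity, which real code must not do:

// tls_client_demo.cpp -- let SSL_connect() drive the handshake (sketch, error handling omitted)
#include <openssl/ssl.h>
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
  // Plain TCP connection to port 443.
  addrinfo hints{}, *res = nullptr;
  hints.ai_socktype = SOCK_STREAM;
  getaddrinfo("example.com", "443", &hints, &res);
  int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
  connect(fd, res->ai_addr, res->ai_addrlen);
  freeaddrinfo(res);

  // TLS handshake: certificate/public-key exchange, pre-master secret,
  // derivation of the session keys -- all performed inside SSL_connect().
  SSL_CTX* ctx = SSL_CTX_new(TLS_client_method());
  SSL* ssl = SSL_new(ctx);
  SSL_set_fd(ssl, fd);
  SSL_set_tlsext_host_name(ssl, "example.com");   // SNI
  if (SSL_connect(ssl) == 1) {
    // From here on, application data is protected with the session keys.
    const char req[] = "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";
    SSL_write(ssl, req, sizeof(req) - 1);
    char buf[256];
    int n = SSL_read(ssl, buf, sizeof(buf) - 1);
    if (n > 0) { buf[n] = '\0'; std::printf("%s\n", buf); }
  }
  SSL_free(ssl);
  SSL_CTX_free(ctx);
  close(fd);
  return 0;
}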

terms

C++

.hpp

It’s not perfect, and you would usually resort to techniques like the Pimpl Idiom to properly separate interface and implementation, but it’s a good start.
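
For reference, a minimal Pimpl sketch (assuming C++14 for std::make_unique; Widget and Impl are illustrative names): the header exposes only an opaque pointer, and every implementation detail lives in the CPP file.

// WIDGET.HPP -- public interface only; no implementation details leak out
#include <memory>

class Widget {
public:
  Widget();
  ~Widget();                       // must be defined where Impl is complete
  void doSomething();
private:
  struct Impl;                     // forward declaration only
  std::unique_ptr<Impl> pimpl_;    // opaque pointer to the implementation
};

// WIDGET.CPP -- hidden from clients of WIDGET.HPP
// #include "WIDGET.HPP"           // in a real project; shown inline here
#include <iostream>

struct Widget::Impl {
  int state = 0;
  void doSomething() { std::cout << "state = " << ++state << "\n"; }
};

Widget::Widget() : pimpl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
void Widget::doSomething() { pimpl_->doSomething(); }

Because Impl is only a forward declaration in the header, its layout can change without forcing clients of WIDGET.HPP to recompile.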

 

Compilation in C++ is done in two major phases:

  1. The first is the compilation of “source” text files into binary “object” files: each CPP file is compiled on its own, without any knowledge of the other CPP files (or even libraries), unless declarations are fed to it through raw declarations or header inclusions. The CPP file is usually compiled into a .OBJ or a .O “object” file.
  2. The second is the linking together of all the “object” files, and thus, the creation of the final binary file (either a library or an executable).

Where does the HPP file fit into this process?

A poor lonesome CPP file…

The compilation of each CPP file is independent from all other CPP files, which means that if A.CPP needs a symbol defined in B.CPP, like:

// A.CPP
void doSomething()
{
   doSomethingElse(); // Defined in B.CPP
}

// B.CPP
void doSomethingElse()
{
   // Etc.
}

It won’t compile because A.CPP has no way to know “doSomethingElse” exists… Unless there is a declaration in A.CPP, like:

// A.CPP
void doSomethingElse() ; // From B.CPP

void doSomething()
{
   doSomethingElse() ; // Defined in B.CPP
}

Then, if you have a C.CPP that uses the same symbol, you copy/paste the declaration again…

COPY/PASTE ALERT!

Yes, there is a problem. Copy/pastes are dangerous and difficult to maintain, so it would be nice to have some way to NOT copy/paste and still declare the symbol… How can we do it? By including a text file, which is commonly suffixed by .h, .hxx, .h++ or, my preferred suffix for C++ files, .hpp:

// B.HPP (here, we decided to declare every symbol defined in B.CPP)
void doSomethingElse() ;

// A.CPP
#include "B.HPP"

void doSomething()
{
   doSomethingElse() ; // Defined in B.CPP
}

// B.CPP
#include "B.HPP"

void doSomethingElse()
{
   // Etc.
}

// C.CPP
#include "B.HPP"

void doSomethingAgain()
{
   doSomethingElse() ; // Defined in B.CPP
}

How does include work?

Including a file will, in essence, parse and then copy-paste its content in the CPP file.

For example, in the following code, with the A.HPP header:

// A.HPP
void someFunction();
void someOtherFunction();

… the source B.CPP:

// B.CPP
#include "A.HPP"

void doSomething()
{
   // Etc.
}

… will become after inclusion:

// B.CPP
void someFunction();
void someOtherFunction();

void doSomething()
{
   // Etc.
}

One small thing – why include B.HPP in B.CPP?

In the current case, this is not needed: B.HPP has the doSomethingElse function declaration, and B.CPP has the doSomethingElse function definition (which is, by itself, a declaration). But in the more general case, where B.HPP is used for declarations (and inline code), there could be no corresponding definition (for example, enums, plain structs, etc.), so the include would be needed if B.CPP uses those declarations from B.HPP. All in all, it is “good taste” for a source file to include its own header by default.

Conclusion

The header file is thus necessary, because the C++ compiler is unable to find symbol declarations on its own; you must help it by including those declarations.

One last word: You should put header guards around the content of your HPP files, to be sure multiple inclusions won’t break anything, but all in all, I believe the main reason for existence of HPP files is explained above.
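
For example, with the B.HPP used above:

// B.HPP
#ifndef B_HPP
#define B_HPP

void doSomethingElse();   // safe to include from A.CPP, B.CPP and C.CPP alike

#endif // B_HPP

Most compilers also accept the non-standard #pragma once as an alternative to explicit include guards.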

 


NDNSIM

cd <ns-3-folder>
./waf configure --enable-examples
./waf

If you run into problems at this point (for example, on macOS with MacPorts), you may need to modify the configure command to use the MacPorts version of Python:

cd <ns-3-folder>
sudo port select python python27
./waf configure --with-python=/opt/local/bin/python2.7 --enable-examples
./waf

 

A topology can be defined in a plain-text file using the experimental extended versions of the TopologyReader classes: AnnotatedTopologyReader and RocketfuelWeightsReader.
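
A minimal scenario sketch, assuming ndnSIM 2.x; the topology file path below points at the grid example shipped with ndnSIM and may need adjusting:

// topo_demo.cpp -- load a topology from a text file (sketch)
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/ndnSIM-module.h"

using namespace ns3;

int main(int argc, char* argv[])
{
  CommandLine cmd;
  cmd.Parse(argc, argv);

  // Read nodes and annotated links (bandwidth, delay, queue) from a txt file.
  AnnotatedTopologyReader topologyReader("", 25);
  topologyReader.SetFileName("src/ndnSIM/examples/topologies/topo-grid-3x3.txt");
  topologyReader.Read();

  // Install the NDN stack on every node defined by the topology.
  ndn::StackHelper ndnHelper;
  ndnHelper.InstallAll();

  Simulator::Stop(Seconds(1.0));
  Simulator::Run();
  Simulator::Destroy();
  return 0;
}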

 

If only one route can be chosen, then use “Choosing forwarding strategy”; otherwise, use “Set BestRoute strategy”.
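
As a hedged sketch, assuming ndnSIM 2.x: the forwarding strategy is selected per name prefix with ndn::StrategyChoiceHelper and NFD’s built-in strategy names (“/prefix” below is a placeholder):

// strategy_demo.cpp -- choose a forwarding strategy (sketch)
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/ndnSIM-module.h"

using namespace ns3;

int main(int argc, char* argv[])
{
  CommandLine cmd;
  cmd.Parse(argc, argv);

  NodeContainer nodes;
  nodes.Create(2);
  PointToPointHelper p2p;
  p2p.Install(nodes.Get(0), nodes.Get(1));

  ndn::StackHelper ndnHelper;
  ndnHelper.InstallAll();

  // Forward each Interest over the single best route:
  ndn::StrategyChoiceHelper::InstallAll("/prefix", "/localhost/nfd/strategy/best-route");

  // Alternatively, forward over all available routes:
  // ndn::StrategyChoiceHelper::InstallAll("/prefix", "/localhost/nfd/strategy/multicast");

  Simulator::Stop(Seconds(1.0));
  Simulator::Run();
  Simulator::Destroy();
  return 0;
}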

 
