
Page 1: Conceptos de Data Center

Company name

00.00.2015

Data Center Concepts

gCTO May 2017

Page 2: Conceptos de Data Center

2

01 Data Center Facilities

Page 3: Conceptos de Data Center

3

Facilities

Data centers are controlled environments where critical processing resources are stored under centralized management.

This allows the company to operate its business requirements and, in turn, obtain the following benefits:

1) Business continuity
2) Security
3) Server and application consolidation
4) Storage consolidation

Page 4: Conceptos de Data Center

4

Business drivers:

Agility: the ability to move quickly.

Resilience: being prepared to recover quickly from an equipment failure or a natural disaster.

Modularity and scalability: fast and easy expansion of the infrastructure.

Reliability and availability: reliability is the ability of equipment to perform a given function; availability is the ability of an item to be in a state in which it can perform a required function.

Sustainability: applying green design, construction, and operations best practices to data centers in order to reduce environmental impact.

TCO

Total Cost of Ownership: the total lifecycle cost, covering CAPEX (land, construction, green design, and data center build-out) and OPEX (for example, energy costs).

Page 5: Conceptos de Data Center

5

Cold Aisles and Hot Aisles
This is a technique for arranging the equipment racks in a data center

in order to improve and optimize its cooling capacity.

Benefits:
• Increase the capacity and efficiency of the cooling system by improving the RAT ("Return Air Temperature")

• Provide more predictable and reliable inlet air temperatures for the IT equipment.

• Improve redundancy in row-based cooling systems by extending the sphere of influence of the cooling units

Page 6: Conceptos de Data Center

6

Cold Aisles and Hot Aisles

Page 7: Conceptos de Data Center

7

Cold Aisles and Hot Aisles

• Basically, the system injects cold air at the front of the racks.
• The cold air passes through the equipment, which expels it at a higher temperature.
• The hot air is lighter, so it rises toward the ceiling, where it is extracted and carried to the cooling room to come back through the cold-air ducts.

• This arrangement ensures that the equipment always takes in cold air,

• which increases the efficiency of the cooling system.

• There are also containment methods that create sealed sections in which the two aisles are much better isolated from each other.

Page 8: Conceptos de Data Center

8

PUE: Power Usage Effectiveness
Power Usage Effectiveness is:

A metric used to determine the energy efficiency of the data center.

It represents the ratio between the power drawn from the energy provider and the power actually consumed by the processing equipment housed in the data center.

PUE = (Total energy consumed) / (Energy consumed by IT equipment)

The average PUE of a data center should be close to 1.6.
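As a quick illustration of the ratio above, here is a minimal Python sketch (the energy figures are made up for the example):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Example: 800 kWh drawn from the provider, 500 kWh consumed by the IT equipment
print(round(pue(800, 500), 2))  # 1.6
```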

Page 9: Conceptos de Data Center

9

Datacenter Tiering

• Data center standards assess the quality and reliability of a facility for hosting equipment.
• The Uptime Institute uses a Tier-based ranking to rate that reliability.
• The 4 Tiers can be defined as follows:

Page 10: Conceptos de Data Center

10

Datacenter Tiering

• TIER I:
• 99.671% uptime, allowing 28.8 hours of downtime per year
• No redundancy

• TIER II:
• 99.749% uptime, allowing 22 hours of downtime per year
• Partial redundancy in power and cooling

• TIER III:
• 99.982% uptime, allowing 1.6 hours of downtime per year
• N+1 fault tolerant, able to provide up to 72 hours of protection against power outages

• TIER IV:
• 99.995% uptime, allowing 26.3 minutes of downtime per year
• 2(N+1) fault tolerant, able to provide up to 96 hours of protection against power outages
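The downtime figures above follow directly from the uptime percentages; a small Python sketch of the arithmetic:

```python
def downtime_hours_per_year(availability_pct: float) -> float:
    """Allowed downtime per year, in hours, for a given availability percentage."""
    return 365 * 24 * (1 - availability_pct / 100)

for tier, pct in [("I", 99.671), ("II", 99.749), ("III", 99.982), ("IV", 99.995)]:
    print(f"Tier {tier}: {downtime_hours_per_year(pct):.1f} h/year")
# Tier I: 28.8, Tier II: 22.0, Tier III: 1.6, Tier IV: 0.4 (about 26 minutes)
```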

Page 11: Conceptos de Data Center

11

Datacenter Tiering

Page 12: Conceptos de Data Center

12

Datacenter Tiering

• N+1: Parallel redundancy
• A configuration in which 2 UPSs carry the system load at the same time,

each one capable of carrying the entire load on its own.

• It is one of the most common configurations.
• It requires the UPSs to be synchronized,

and usually to be from the same manufacturer.

• The design has single points of failure,
• both in the power feed to the UPSs
• and in the distribution to the systems.

• It is not fault tolerant.
• This can be nuanced depending on the implementation, for Tier 2, 3, and 4 according to TIA-942.

Page 13: Conceptos de Data Center

13

Datacenter Tiering

• 2(N+1): Double parallel redundancy
• Two parallel-redundancy configurations feeding the critical equipment simultaneously.
• Requires at least quadrupling the electrical capacity needed to power the IT systems, since each of the minimum 4 UPSs must be able to protect the entire load on its own.

• Requires 2 generators, each capable of independently carrying the entire load of the facility.

• The whole system is fault tolerant.
• It can be maintained without exposing the systems to service interruptions.
• It corresponds to Tier IV in TIA-942,

provided that 2 separate electrical utility feeds are used.

Page 14: Conceptos de Data Center

14

EoR vs TOR Architecture

Page 15: Conceptos de Data Center

15

LEAF and SPINE Data Fabric

Page 16: Conceptos de Data Center

16

02 Servers

Page 17: Conceptos de Data Center

17

Servers

• Servers are processing machines that, unlike a personal computer, can have more processors, more memory, hardware redundancy, etc.

• They can be rack-mounted (1U or 2U) or blade servers.

Page 18: Conceptos de Data Center

18

Servers

• They are made up of one or more:
• power supplies
• CPUs
• peripherals
• HBAs (Host Bus Adapters)
• RAID controllers
• disks
• cooling system
• remote management

Page 19: Conceptos de Data Center

19

Servers: CPU (processor)
• A CPU, or processor, is an electronic circuit that responds to and processes instructions.
• The 4 functions of the processor are:

• Fetch
• Decode
• Execute
• Write back

• Technologies such as Intel Hyper-Threading allow one processor to act as 2 logical processors, on which it can run 2 applications at the same time.

• There are processors with different architectures, processors with several cores on the same die, and even processors that include a chipset to control peripherals.

• A concept to keep in mind is the term "core". Cores are like "CPUs" inside a package called the processor.

• A physical CPU generally has one or more cores. Today there are processors with anywhere from 4 to 28 cores in a single physical package.

Page 20: Conceptos de Data Center

20

Servers: Memory
• Memory is a special element in a computer or a server, since it stores information.

• In general there can be RAM and cache memory.

• RAM is fast memory in terms of the read and write times at which the processor can load and store data.

• Cache memory is generally very fast and is included inside the CPU package, so access to it is much faster and has lower latency.

• These memories typically come in sizes of 4, 8, 16 GB or more.

Page 21: Conceptos de Data Center

21

Servers: NUMA
• Non-Uniform Memory Access (NUMA) is a way of configuring a cluster of microprocessors in a multiprocessing system so that they access shared memory locally, improving performance and making the system easier to expand.

• Each cluster is called a NUMA node.

• In servers with several processors, each processor usually has its own memory resources and I/O devices.

Page 22: Conceptos de Data Center

22

Servers: NUMA
• In a server, the NUMA nodes are interconnected by interconnect modules in order to exchange data.

• If a CPU tries to access remote memory (memory that is not in its own NUMA node), it has to wait. As a result, NUMA performance does not scale linearly as CPUs are added.

• Access by a CPU to memory within its own NUMA node is much faster than access to the memory of a remote NUMA node.
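On Linux, the NUMA layout described above can be inspected through sysfs; a minimal Python sketch (Linux-only, standard kernel paths):

```python
from pathlib import Path

# List each NUMA node with its CPUs and total memory, as exposed by the kernel.
for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpus = (node / "cpulist").read_text().strip()
    mem_total = (node / "meminfo").read_text().splitlines()[0].strip()
    print(f"{node.name}: CPUs {cpus} | {mem_total}")
```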

Page 23: Conceptos de Data Center

23

Servers: Peripherals
• Among the peripherals most commonly used in servers we find:

• NIC (Network Interface Card)
• HBA (Host Bus Adapter)
• RAID controller

• The NIC is the server's basic component for connectivity with the outside world.

• Some features of network cards are:
• Interrupts and DMA (Direct Memory Access)
• Multiple Rx and Tx queues
• Partitioning of the physical card into several logical cards
• TCP offload

• Typical NIC speeds are 1 Gb, 10 Gb, 25 Gb, and 40 Gb.

Page 24: Conceptos de Data Center

24

Servers: Peripherals
• HBA devices are used to connect servers to storage devices.

• HBA types:
• CNA (Converged Network Adapter), for:
• FCoE
• FC
• iSCSI (SCSI over IP)

Page 25: Conceptos de Data Center

25

Servers: Peripherals
• RAID controllers are cards that manage the server's physical disks.

• They can handle the RAID levels in hardware.

Page 26: Conceptos de Data Center

26

03 Storage

Page 27: Conceptos de Data Center

27

Storage

• Interface protocols

• Disks and storage

• RAID

• High End Storage

• Mid Range Storage

Page 28: Conceptos de Data Center

28

Storage: Interface protocols
• Storage interface protocols enable communication between the node (host) and the storage.

• Interface protocols are implemented using interfaces or controllers at both the source and the destination.

• Interface protocols:
• SATA

• NEARLINE SAS

• SCSI SAS

• FIBRE CHANNEL

• IP

• FCoE

Page 29: Conceptos de Data Center

29

Storage: Interface protocols
• SATA interface protocol:
• It is the serial version of the old IDE/ATA protocol
• High performance
• Low cost
• Revision 3 of the SATA protocol reaches 6 Gbit/s.

• NEARLINE SAS interface protocol:
• It is a firmware optimization applied to SATA disks.

Page 30: Conceptos de Data Center

30

Storage: Interface protocols
• SCSI interface protocol:
• SCSI emerged as one of the preferred protocols for the server environment. It uses parallel data transmission and greatly improves (compared with the old ATA):
• performance
• scalability
• compatibility

• But it has one disadvantage:
• cost, which limited its popularity in PCs

• However:
• Over the years the protocol kept improving
• An alternative called SAS was born

• SAS interface protocol:
• SAS is a serial protocol with transfer rates of up to 6 Gbit/s
• SAS controllers are compatible with SATA disks (since they share the same cable and connector format)

Page 31: Conceptos de Data Center

31

Storage: Interface protocols
• Fibre Channel interface protocol:
• Fibre Channel is a protocol widely used for communication with high-speed storage devices.
• It is a serial transmission protocol
• It runs over copper or optical cabling
• The latest version of the protocol runs at 16 Gbit/s

• IP interface protocol:
• The IP protocol is generally used for data transfer between hosts
• But improvements in the link layers have made it an alternative for accessing storage devices
• Advantages:
• Cost
• Maturity
• Companies can amortize the cost over existing infrastructure

• There are two protocols:
• iSCSI, currently the most widely used because it is easy to implement
• FCIP

Page 32: Conceptos de Data Center

32

Storage: Interface protocols
• FCoE interface protocol:
• Fibre Channel is a protocol widely used in data center networks
• There was an attempt to unify Fibre Channel with Ethernet, which is already a very mature protocol
• FCoE is a modification of Ethernet that uses adapters called CNAs
• CNA adapters connect to switches that support this protocol.
• Disadvantage:
• Although it is proposed as the successor to FC, it is not easy to configure

Page 33: Conceptos de Data Center

33

Disks and Storage
• Mechanical disks:
• They are data storage devices
• They can be:
• Mechanical
• Or memory-based

• The data is accessed through interface protocols
• Characteristics:
• Capacity
• Interface protocol
• Rotational speed (mechanical disks only)
• 5400 (SATA)
• 7200 (SATA)
• 10K (SAS / FC)
• 15K (SAS / FC)
• It influences the so-called seek time: the time it takes the R/W head to position itself across the platter with a radial movement.
• Rotational time: once the R/W head is positioned over the track, the time it takes to read the selected sector.
• Format
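The rotation speeds listed above translate directly into average rotational latency (on average, half a revolution must pass under the head); a small illustrative calculation:

```python
def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency: half the time of one full revolution, in ms."""
    return (60_000 / rpm) / 2

for rpm in (5400, 7200, 10_000, 15_000):
    print(f"{rpm} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms")
# 5400 -> 5.56 ms, 7200 -> 4.17 ms, 10000 -> 3.00 ms, 15000 -> 2.00 ms
```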

Page 34: Conceptos de Data Center

34

Disks and Storage
• SSD disks:
• They are data storage devices based on flash memory
• They are much faster: there are no seek or rotational times
• There are several SSD technologies
• Some SSD technologies target the enterprise market and others the consumer market
• The difference is endurance, since they have a more limited lifespan.

Page 35: Conceptos de Data Center

35

Disks and Storage
• Storage:
• The storage array is the data center component where data is stored
• It uses disks to carry out that task
• Difference between the storage array and the disks:
• The array provides protection, more speed, and other features that a single disk does not have
• The array can:
• concatenate several disks
• use disks in parallel
• add a memory cache
• provide redundant connectivity to the host

• The storage can be accessed in two ways:
• Blocks
• Files

Page 36: Conceptos de Data Center

36

Disks and Storage
• Ways to access the storage:

Page 37: Conceptos de Data Center

37

Disks and Storage
• Ways to access the storage:

• The way the storage is accessed (block or file) defines the type of access network.
• Block-level access: called a SAN, with FC or FCoE as the access protocol
• File-level access: called a NAS, with NFS or CIFS as the access protocol

• There is a particular case, iSCSI:
• It provides block-level access but is usually treated as NAS

Page 38: Conceptos de Data Center

38

Disks and Storage: RAID
• RAID is a technology that uses multiple disks as part of a set to provide data protection against drive failures.

• RAID: Redundant Array of Independent Disks

• RAID implementations generally improve storage system performance by serving I/O from multiple disks simultaneously.

• Modern arrays with flash drives also benefit from RAID in terms of protection and performance.

• RAID can be implemented in hardware or in software.

• Hardware RAID implementations deliver the highest performance.

Page 39: Conceptos de Data Center

39

Disks and Storage: RAID
• RAID 0

• Uses data striping (concatenation) techniques

• Data is striped across all the disks in the RAID set.

• The total RAID capacity is the sum of the capacities of all the disks.

• It has high performance (which grows with the number of disks, because it can read from and write to different disks at the same time).

• This RAID type provides no protection.

• The failure of a single disk can corrupt the whole RAID.

Page 40: Conceptos de Data Center

40

Disks and Storage: RAID
• RAID 1
• This RAID type is based on data mirroring
• Data that needs to be written to the RAID must first be written to the disks that make it up.
• The RAID capacity is equal to that of the smallest disk.
• For that reason, identical disks are generally used.
• This RAID is ideal for applications that need high availability regardless of cost.

Page 41: Conceptos de Data Center

41

Disks and Storage: RAID
• NESTED RAID

• Sometimes the redundancy and performance of RAID are required together with large amounts of storage.
• One way to achieve that goal is nested RAID.
• RAID 1+0 (also called RAID 10) or RAID 0+1 is used.

Page 42: Conceptos de Data Center

42

Disks and Storage: RAID
• Under normal conditions, both RAID 10 and RAID 0+1 offer the same kind of benefits.

• The differences show up when a failed disk has to be rebuilt:

• In RAID 10, only the mirror is rebuilt.

• In RAID 0+1, the whole stripe set is rebuilt, which puts unnecessary extra load on the surviving disks and leaves the RAID more vulnerable to the failure of a second disk.

Page 43: Conceptos de Data Center

43

Disks and Storage: RAID
• RAID 5

• This RAID type is widely used for read-intensive, random-access workloads in general.

• It requires a minimum of 3 disks.

• For each incoming piece of data it computes parity (an XOR function), and the parity rotates across the disks.

• There are less commonly used RAID levels that dedicate one disk to parity (RAID 3 if it stripes by bytes, RAID 4 if it stripes by blocks).
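A toy illustration of the XOR parity idea mentioned above (a sketch of the principle, not of how a real controller lays out stripes):

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equally sized blocks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

d1, d2 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40"  # data blocks of one stripe
parity = xor_blocks(d1, d2)                         # parity block for the stripe

# The disk holding d1 fails: rebuild it from the surviving block and the parity.
rebuilt = xor_blocks(parity, d2)
assert rebuilt == d1
```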

Page 44: Conceptos de Data Center

44

Disks and Storage: RAID
• RAID 6

• Disk capacities keep growing.

• When a RAID 5 disk fails, rebuilding it takes a long time, and the rebuild generally increases the activity on the remaining disks, exposing the RAID to the failure of a second disk.

• To mitigate this problem, RAID 6 was created: it computes two parities and requires a minimum of 4 disks.

Page 45: Conceptos de Data Center

45

Disks and Storage: RAID
• RAID comparison
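A rough comparison of the RAID levels discussed above, assuming n identical disks (a sketch; real arrays add caveats such as rebuild times and hot spares):

```python
def raid_summary(n: int) -> dict:
    """Usable disks and tolerated failures for n identical disks per RAID level."""
    return {
        "RAID 0":  {"min_disks": 2, "usable_disks": n,      "failures_tolerated": 0},
        "RAID 1":  {"min_disks": 2, "usable_disks": 1,      "failures_tolerated": n - 1},
        "RAID 5":  {"min_disks": 3, "usable_disks": n - 1,  "failures_tolerated": 1},
        "RAID 6":  {"min_disks": 4, "usable_disks": n - 2,  "failures_tolerated": 2},
        "RAID 10": {"min_disks": 4, "usable_disks": n // 2, "failures_tolerated": 1},  # at least 1
    }

for level, info in raid_summary(6).items():
    print(level, info)
```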

Page 46: Conceptos de Data Center

46

Disks and Storage: RAID
• HOT Spare

• A disk designated as a hot spare is a disk that is not in use.

• It replaces a failed disk, temporarily or permanently.

• The moment a RAID disk fails, the data starts being rebuilt onto a hot spare disk.

• This minimizes the time between the failure and the replacement of the failed disk.

• When the failed disk is replaced, the spare can return to being the hot spare, or the new disk can take on that role.

Page 47: Conceptos de Data Center

47

Disks and Storage: RAID
• High-End Storage

A high-end storage array is a system that:

• Works active–active
• With N controllers
• A large amount of cache
• An internal matrix for disk access

• Working in active-active mode means the host can send data to any of the available controllers.

• In general these are highly available systems, since their components can be replaced while running without affecting the operation of the device.
• LUN stands for Logical Unit Number.

Page 48: Conceptos de Data Center

48

Disks and Storage: RAID
• Midrange Storage

A midrange storage array is a system that:

• Works active–passive
• At a lower cost than high-end storage
• There are 2 controllers (active and passive)
• The host must read and write through the active controller.
• There is no internal matrix for disk access.

Page 49: Conceptos de Data Center

49

04 Backup

Page 50: Conceptos de Data Center

50

Backup:
• A backup is an additional copy of the production data,

• created and retained solely for the purpose of recovery after data loss or data corruption.

• Backup window:
• The period during which the backup source is available to perform the backup.
• Sometimes performing a backup requires:

• that operations on the source be suspended so the data is consistent,
• or that the backup not affect performance.

Page 51: Conceptos de Data Center

51

Backup
• Granularity:

• Full backup:
• A complete backup of the production volumes
• Performed by copying the data to the backup storage devices
• Provides fast data recovery.
• But:
• It requires a lot of storage
• It takes a long time to run

• Incremental backup:
• Backs up the data that changed since:
• the last full backup
• or the last incremental (whichever was performed most recently)

• It is much faster than a full backup (since it only copies the data that changed)
• But: it takes longer to restore

• Differential or cumulative backup:
• This backup type always backs up the data changed since the last full backup
• It takes longer than an incremental backup, but the restore is much faster

Page 52: Conceptos de Data Center

52

Backup
• Granularity:

• Synthetic full backup:
• It is a full backup,
• but created from the last full backup plus all the most recent incrementals.
• Since this kind of backup is generated on the backup server, it does not consume resources on the source.
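A toy sketch of the granularities described above, showing which files each scheme would copy (file names and dates are made up for the example):

```python
files = {"a.db": 1, "b.log": 5, "c.cfg": 9}   # file -> "day" it was last modified
last_full, last_incremental = 0, 7            # days on which the last backups ran

full         = set(files)                                             # copies everything
incremental  = {f for f, t in files.items() if t > last_incremental}  # changed since last backup of any kind
differential = {f for f, t in files.items() if t > last_full}         # changed since the last full only

print(full)          # {'a.db', 'b.log', 'c.cfg'}
print(incremental)   # {'c.cfg'}
print(differential)  # {'b.log', 'c.cfg'}
```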

Page 53: Conceptos de Data Center

53

Backup
• Architecture:

• A backup system generally uses a client-server architecture

• with a backup server and multiple clients.

• The server:
• Manages the backup operations
• Maintains the catalog, which contains information about the backups and their metadata.
• The backup configuration contains information about the data being backed up.

Page 54: Conceptos de Data Center

54

Backup
• Architecture:

• The client:
• Sends the data to be backed up to the storage node or media server.
• It also sends metadata to the backup server.
• Storage node or media server: it is responsible for writing the data to be backed up to the backup device. This device can be:
• a disk
• a tape
• a VTL (Virtual Tape Library)

• The storage node also sends information to the backup server.
• In general the backup server is integrated on the same machine as the media server or storage node.

Page 55: Conceptos de Data Center

55

Backup
• Architecture:

Page 56: Conceptos de Data Center

56

Backup
• Topologies:

• There are basically 3 basic topologies for backup environments:

• Direct attached backup.

• LAN-based backup.

• SAN-based backup

• A combination of LAN and SAN is also possible

Page 57: Conceptos de Data Center

57

Backup
• Direct-attached backup:

• The backup client takes on the role of media server or storage node
• The client sends the data directly to the backup device
• This method avoids sending the data over the LAN

Page 58: Conceptos de Data Center

58

Backup
• LAN-based backup:

• The roles are separated and connected through a LAN, so the data to be backed up has to be transferred between the client and the storage node or media server.

Page 59: Conceptos de Data Center

59

Backup
• SAN-based backup:

• Also called LAN-free backup.
• It is one of the most suitable solutions.
• All backup traffic goes over the SAN: it is read directly from the disks by the storage node and sent to the backup device.

Page 60: Conceptos de Data Center

60

Backup
• Mixed topology:

• It is a mix of the previous topologies. It can be implemented for reasons such as:
• Cost
• Server location
• Performance considerations

Page 61: Conceptos de Data Center

61

Backup
• Image-based backup:

• This method works at the hypervisor level
• It is used in virtualization environments
• It creates a copy of the guest operating system (VM) and of the VM's configuration state by means of a snapshot
• The backup is stored as an image that is mounted on a proxy server, which acts as the backup client.
• The backup software then backs up that image.
This reduces the load on the hypervisor and on the VM.
• Sometimes a restore may involve importing a VM, or even restoring data onto the same VM the backup was taken from.
• The latter requires installing the backup agent inside the VM.

Page 62: Conceptos de Data Center

62

Backup
• De-duplication:

• It is a process for identifying and eliminating redundant data.

• When duplicate data is detected in a backup, it is eliminated

• and only a reference to the copy that is already backed up is kept.

• De-duplication makes it possible to:
• Reduce the storage used for backups
• Shorten the backup window
• Reduce network usage

• There are two methods (each with its own benefits):
• performing it at the source
• performing it at the target

• There are also inline de-duplication processes and post-process (offline) de-duplication.
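A toy sketch of the de-duplication idea: store each unique chunk once, keyed by its content hash, and keep only references for repeats:

```python
import hashlib

def dedup(chunks: list[bytes]):
    store: dict[str, bytes] = {}   # unique chunks, keyed by content hash
    refs: list[str] = []           # the "backup" becomes a list of references
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        refs.append(digest)
    return store, refs

store, refs = dedup([b"block-A", b"block-B", b"block-A", b"block-A"])
print(len(refs), "chunks referenced,", len(store), "actually stored")  # 4 referenced, 2 stored
```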

Page 63: Conceptos de Data Center

63

Backup
• De-duplication:

Page 64: Conceptos de Data Center

64

05 NFV Concepts

Page 65: Conceptos de Data Center

65

NFV Concepts
• Hyperthreading:

• It is an Intel technology that allows a single physical core to act as 2 separate logical cores for the operating system and the applications.

• This technology significantly improves the performance of certain applications.
• Depending on the type of application, it can approach double the performance.
• This setting is enabled from the hardware BIOS.
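A small sketch for checking whether SMT/Hyper-Threading is active on a host, by comparing logical and physical core counts (assumes the third-party psutil package is installed):

```python
import psutil

logical = psutil.cpu_count(logical=True)    # logical CPUs seen by the OS
physical = psutil.cpu_count(logical=False)  # physical cores
print(f"{physical} physical cores, {logical} logical CPUs -> "
      f"SMT/Hyper-Threading {'enabled' if logical > physical else 'disabled'}")
```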

Page 66: Conceptos de Data Center

66

NFV Concepts
• CPU Pinning:

• It is the ability to make a VM run on a specific physical core. It creates a mapping between a vCPU and a physical CPU, which greatly improves the VM's performance.

Page 67: Conceptos de Data Center

67

NFV Concepts
• Flavors:

• Flavors define several parameters of a VM (Virtual Machine), for example:

• vCPUs
• Memory
• Affinity or anti-affinity with another VM
• Storage
• CPU pinning
• NUMA
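In OpenStack, CPU pinning, huge pages, and NUMA placement are typically requested through flavor extra specs; a hedged sketch of what such a flavor definition might look like (the name and sizes are illustrative; the hw:* keys are standard Nova extra specs):

```python
# Illustrative NFV flavor definition expressed as plain data.
nfv_flavor = {
    "name": "nfv.large",      # hypothetical flavor name
    "vcpus": 8,
    "ram_mb": 16384,
    "disk_gb": 40,
    "extra_specs": {
        "hw:cpu_policy": "dedicated",  # pin each vCPU to a physical core
        "hw:mem_page_size": "large",   # back the instance with huge pages
        "hw:numa_nodes": "1",          # keep the instance inside one NUMA node
    },
}
# These properties would normally be applied with `openstack flavor set --property ...`.
```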

Page 68: Conceptos de Data Center

68

NFV Concepts
• Open vSwitch (OVS):

• It is open source software.
• Designed to be used as a virtual switch in virtualized server environments.
• It is in charge of forwarding traffic between different VMs on the same physical host (physical server),
• and also of forwarding traffic between the VMs and the physical network (the server's NICs).
• Open vSwitch is one of the most popular implementations of OpenFlow.
• Open vSwitch supports numerous Linux-based technologies.

• Open vSwitch provides 2 external management protocols designed for remote management from SDN controllers:
• OpenFlow: allows the flow tables to be queried and modified, so that the behavior of the OVS can be reprogrammed dynamically with this protocol and packets can be forwarded from one OVS port to another.

• OVSDB (Open vSwitch Database Management Protocol): a protocol for managing and modifying the configuration of the OVS.

Page 69: Conceptos de Data Center
Page 70: Conceptos de Data Center

Chapter 1. Introducing OpenStack

At the Vancouver OpenStack Conference in May 2015, US retail giant Walmart announced that they had deployed an OpenStack cloud with 140,000 cores of compute supporting 1.5 billion page views on Cyber Monday. CERN, a longtime OpenStack user, announced that their OpenStack private cloud had grown to 100,000 cores running computational workloads on two petabytes of disk in production. Another 250 companies and organizations across nine industry verticals have announced that they have adopted OpenStack in their data centers.

OpenStack had completely redrawn the private cloud landscape in the five short years of its existence. In this chapter, we'll look at what OpenStack is and why it has been so influential. We'll also take the first steps in architecting a cloud.

Page 71: Conceptos de Data Center

What is OpenStack?

OpenStack is best defined by its use cases, as users and contributors approach the software with many different goals in mind.

For hosting providers such as Rackspace, OpenStack provides the infrastructure for a multitenant shared services platform.

For others, it might provide a mechanism for provisioning data and compute for a distributed business intelligence application.

There are a few answers to this question that are relevant regardless of your organization's use case.

Page 72: Conceptos de Data Center

OpenStack is an API

One of the initial goals of OpenStack was to provide Application Program Interface (API) compatibility with the Amazon Web Service.

As of the November 2014 user survey, 44% of production deployments were still using the EC2 Compatibility API to interact with the system.

As the popularity of the platform has increased, the OpenStack API has become a de facto standard on its own.

Every feature or function of OpenStack is exposed in one of its REST APIs. Representational State Transfer (REST) is an architectural style that defines a set of constraints to be used for creating web services.

There are command-line interfaces for OpenStack (legacy nova and the newer openstack common client) as well as a standard web interface (Horizon).

However, most interactions between the components and end users happen over the API.

This is advantageous for the following reasons:

Everything in the system can be automated

Integration with other systems is well defined

Use cases can be clearly defined and automatically tested

Page 73: Conceptos de Data Center

OpenStack - an open source software project

OpenStack is an open source software project which has a huge number of contributors from a wide range of organizations.

OpenStack was originally created by NASA and Rackspace. Rackspace is still a significant contributor to OpenStack, but these days contributions to the project come from a wide array of companies, including the traditional open source contributors (Red Hat, IBM, and HP) as well as companies which are dedicated entirely to OpenStack (Mirantis, and CloudBase).

Contributions come in the form of drivers for particular pieces of infrastructure (that is, Cinder block storage drivers or Neutron SDN drivers), bug fixes, or new features in the core projects.

OpenStack is governed by a foundation. Membership in the foundation is free and open to anyone who wishes to join. There are currently thousands of members in the foundation.

Page 74: Conceptos de Data Center

OpenStack - a private cloud platform

Finally, OpenStack provides the software modules necessary to build an automated private cloud platform. While OpenStack has traditionally been focused on providing Infrastructure as a Service capabilities in the style of Amazon Web Services, new projects have been introduced lately, which begin to provide capabilities which might be associated more with Platform as a Service.

The most important aspect of OpenStack pertaining to its usage as a private cloud platform is the tenant model. The authentication and authorization services which provide this model are implemented in the Identity service, Keystone. Every virtual or physical object governed by the OpenStack system exists within a private space referred to as a tenant or project.

Page 75: Conceptos de Data Center

Rapid application development

The primary driver for enterprise adoption of OpenStack has been the increasing use of continuous integration and continuous delivery in the application development workflow. A typical Continuous Integration and Continuous Delivery (CI/CD) workflow will deploy a complete application on every developer commit which passes basic unit tests in order to perform automated integration testing. These application deployments live as long as it takes to run the unit tests and then an automated process tears down the deployment once the tests pass or fail. This workflow is easily facilitated with a combination of OpenStack Compute and Network services. Indeed, 92% of OpenStack users reported using their private clouds for CI/CD workflows in the Kilo user survey.

Page 76: Conceptos de Data Center

Network Function Virtualization

An emerging and exciting use case for OpenStack is Network Function Virtualization (NFV).

NFV solves a problem particular to the telecommunications industry, which is in the process of replacing the purpose built hardware devices which provide network services with virtualized appliances which run on commodity hardware.

Some of these services are routing, proxies, content filtering as well as packet core services and high volume switching.

Most of these appliances have intense compute requirements and are largely stateless. These workloads are well-suited for the OpenStack compute model.

NFV use cases typically leverage hardware features which can directly attach compute instances to physical network interfaces on compute nodes.

Instances are also typically very sensitive to CPU and memory topology (NUMA) and virtual cores tend to be mapped directly to physical cores.

These deployments focus heavily on the Compute service and typically don't make use of OpenStack services such as Object Storage or Orchestration.

Architects of NFV solutions will focus primarily on virtual instance placement and performance issues and less on tenancy and integration issues.

Page 77: Conceptos de Data Center

Considerations for performance-intensive workloads

Some workloads, particularly Network Function Virtualization (NFV) workloads, have very specific performance requirements that need to be addressed in the hardware selection process.

A few improvements to the scheduling of instances have recently been added to the Nova compute service in order to enable these workloads.

Page 78: Conceptos de Data Center

NFV is a new and developing use case in which OpenStack provides the infrastructure for workloads that would otherwise run on dedicated network appliances. Routers, firewalls, proxy servers, and packet core devices are some of the common virtual network functions that are being replaced today. Not only does NFV reduce the physical hardware needed to operate telecommunication businesses, but it also brings all of the benefits of cloud-native application orchestration to a business case that has an increasing need for agility to supply consumers with services faster than ever.

Page 79: Conceptos de Data Center

The first improvement allows for passing through a PCI device (Peripheral Component Interconnect) directly from the hardware into the instance.

In standard OpenStack Neutron networking, packets traverse a set of bridges between the instance's virtual interface and the actual network interface. The amount of overhead in this virtual networking is significant, as each packet consumes CPU resources each time it traverses an interface or switch.

This performance overhead can be eliminated by allowing the virtual machine to have direct access to the network device via a technology called SR-IOV.

SR-IOV allows a single network adapter to appear as multiple network adapters to the operating system. Each of these virtual network adapters (referred to as virtual functions, or VFs) can be directly associated with a virtual instance.
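On a Linux compute node, SR-IOV capable NICs and their configured virtual functions can be listed from sysfs; a minimal sketch (Linux-only, standard kernel attributes):

```python
from pathlib import Path

# Print each SR-IOV capable interface with configured / maximum virtual functions.
for dev in sorted(Path("/sys/class/net").iterdir()):
    total = dev / "device" / "sriov_totalvfs"
    if total.exists():
        configured = (dev / "device" / "sriov_numvfs").read_text().strip()
        print(f"{dev.name}: {configured}/{total.read_text().strip()} virtual functions configured")
```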

Page 80: Conceptos de Data Center

The second improvement allows the Nova scheduler to specify the CPU and memory zone for an instance on a compute node which has Non-Uniform Memory Access (NUMA).

In these NUMA systems, a certain amount of memory is located closer to a certain set of processor cores. A performance penalty occurs when processes access memory pages in a region which is nonadjacent. Another significant performance penalty occurs when processes move from one memory zone to another. To get around these performance penalties, the Nova scheduler has the ability to pin the virtual CPUs of an instance to physical cores in the underlying compute node. It also has the ability to restrict a virtual instance to a given memory region associated with those virtual CPUs, effectively constraining the instance to a specified NUMA zone.

Page 81: Conceptos de Data Center

The last major performance improvement in the Nova Compute service is around memory page allocation.

By default, the Linux operating system allocates memory in 4 kilobyte pages on 64-bit Intel systems. While this makes a lot of sense for traditional workloads (it maps to the size of a typical filesystem block), it can have an adverse effect on memory allocation performance in virtual machines. The Linux operating system also allows for 2 megabyte and 1 gigabyte sized memory pages, commonly referred to as huge pages.

The Kilo release of OpenStack included support for using huge pages to back virtual instances.
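A minimal sketch for checking the huge page configuration of a Linux host from /proc/meminfo (standard fields such as HugePages_Total and Hugepagesize):

```python
def hugepage_info() -> dict:
    """Collect the huge-page-related fields from /proc/meminfo."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, value = line.partition(":")
            if key.startswith("HugePages") or key == "Hugepagesize":
                info[key] = value.strip()
    return info

print(hugepage_info())
# e.g. {'HugePages_Total': '0', 'HugePages_Free': '0', ..., 'Hugepagesize': '2048 kB'}
```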

Page 82: Conceptos de Data Center

The combination of PCI-passthrough, CPU and memory pinning, and huge page support allows dramatic performance improvements for virtual instances in OpenStack and is required for workloads such as NFV.

They have some implications for hardware selection that are worth noting, though.

Typical NFV instances will expect to have an entire NUMA zone dedicated to them. As such, these are typically very large flavors and the flavors tend to be application specific.

They're also hardware specific: if your flavor specifies that the instance needs 16 virtual CPUs and 32 gigabytes of memory, then the hardware needs to have a NUMA zone with 16 physical cores and 32 gigabytes of memory available.

Also, if that NUMA zone has an additional 32 gigabytes of memory configured, it will be unavailable to the rest of the system, as the instance has exclusive access to that zone.

Page 83: Conceptos de Data Center

RED HAT OPENSTACK PLATFORM

Overview

Telco Technology Office

Spring, 2017

Page 84: Conceptos de Data Center

Red Hat OpenStack Platform

●Intro

●Importance of Linux

●NFV

●Life Cycle

Page 85: Conceptos de Data Center


WHAT IS OPENSTACK ?

● An interoperability standard

● A development community

● A very active Open Source project

● Provides all of the building blocks to create an Infrastructure-

as-a-Service cloud

● Governed by the vendor agnostic OpenStack Foundation

Page 86: Conceptos de Data Center


OPENSTACK POWERS DIGITAL BUSINESS

Brings public cloud capabilities into your datacenter

Provides massive on-demand (scale-out) capacity:

1,000’s 10,000’s 100k’s of VMs

Removes vendor lock-in

Open source provides high-degree of flexibility to customize and interoperate

Community development = higher “feature velocity”

Features & functions you need, faster to market over proprietary software

Greater automation, resource provisioning, and scaling

Page 87: Conceptos de Data Center


OPENSTACK ARCHITECTURE*

● OpenStack is made up of individual autonomous components

● Introduces the concept of IaaS with multi-tenancy

● All of which are designed to scale-out to accommodate throughput and availability

● OpenStack is considered more of a framework, that relies on drivers and plugins

● Largely written in Python and is heavily dependent on Linux

* Based on Newton release (Red Hat OSP 10)

Page 88: Conceptos de Data Center


CO-ENGINEERED WITH RHEL

[Diagram: OpenStack co-engineered with RHEL – OpenStack runs on RHEL + KVM with Ceph storage and OVS networking on commodity servers; the Linux kernel provides KVM, the network stack, device drivers, and Security Enhanced Linux (SELinux); supported guests include Windows and Linux.]

A typical OpenStack cloud is made up of at least 10 core services + plugins to interact with 3rd party systems

Page 89: Conceptos de Data Center


INFORMATION AND COMMUNICATIONS TECHNOLOGIES

ADDRESSING THE COMPLETE INDUSTRY NEED

PERFORMANCE: Enhanced Platform Awareness (EPA) – SR-IOV, OVS/DPDK, vCPU and NUMA pinning, huge pages, RT-KVM*...

AVAILABILITY: High Availability, Fault Tolerant Design, Enterprise Hardened Code, Instance Availability

SECURITY: End-to-End – SELinux sVirt, Neutron Security Groups, Block Encryption, SSL/TLS on APIs

MANAGEABILITY: Logging, Performance Monitoring, Operational Visibility, Policy and Compliance

LIFECYCLE: OpenStack Lifecycle, Updates/Patches

Page 90: Conceptos de Data Center


RH-OSP in NFV Architecture

[Diagram: RH-OSP in the ETSI NFV architecture – OSS/BSS and MANO (NFV Orchestrator, VNF Manager, CloudForms, Director) sit alongside the NFV Infrastructure; Element Managers (EM) and Virtual Network Functions (VNFs) run in VMs on RHEL (with DPDK), on top of Red Hat OpenStack Platform (RH-OSP), Red Hat Enterprise Linux (RHEL), KVM (QEMU/libvirt), OVS-DPDK, OpenDaylight, and Ceph.]

Page 91: Conceptos de Data Center


New Lifecycle Consumption Options

Customers desiring a longer-life version:
• Long-life versions offered every 3rd release
• Standard 3-year lifecycle, with optional 1-2 years of ELS (extended lifecycle support)
• Will offer long life → long life automated migrations

Customers desiring the latest features:
• Offered on each release
• Supported for 1 year
• Utilize director for automated upgrades and updates continuously

Page 92: Conceptos de Data Center


Red Hat OpenStack Platform Lifecycle

● Every 18 months, we elect an extended life support version (E releases)
○ Selected backports available to E releases
○ In-place upgrades from n to n+1 from this release supported if done within year 1 (with a 6 month buffer window)
○ Upgrades from E -> E will be done via automated migration to the latest RHOSP available, with tooling provided
○ Extension from 3 to 5 years at additional cost

● Every 6-month release of RHOSP is supported for 1 year
○ No feature backports (Production Phase 2 from the start)
○ In-place/online upgrades supported during this timeframe from n to n+1

Page 93: Conceptos de Data Center


Lifecycle Definition

Description | Production Phase 1 | Production Phase 2 | Extended Life Support
Access to previously released content through the Red Hat Customer Portal | Yes | Yes | Yes
Self help through the Red Hat Customer Portal | Yes | Yes | Yes
Technical Support | Unlimited | Unlimited | Yes
Asynchronous Security Errata (RHSA) | Yes | Yes | Yes
Asynchronous Bug Fix Errata (RHBA) | Yes | Yes | Select
Backporting of new features (potential) | Yes | No | No
Installer updates after newer release is out | Yes | No | No
New partner additions & certification after newer release is out | Yes | No | No
Normal releases | N/A | 12 months | N/A
Long Life releases | First 18 months | 18-36 months | +2 years

Page 94: Conceptos de Data Center

Red Hat OpenStack Platform

Page 95: Conceptos de Data Center


OpenStack connects two worlds

Tenant view – Developers
Operator view – Administrators

Page 96: Conceptos de Data Center


Tenant view – the actual OpenStack IaaS user. Limited by what the Operator decides to offer in that cloud.

Operator view – often the same role that has root access to the systems. Combines configuration files and API actions to create a working environment for his tenants.


OpenStack connects two worlds

Page 97: Conceptos de Data Center


OpenStack Dashboard

(Horizon)

● Horizon is OpenStack’s web-based self-service portal

● Sits on-top of all of the other OpenStack components via API interaction

● Provides a subset of underlying functionality

● Examples include: instance creation, network configuration, block storage attachment

● Exposes an administrative extension for basic tasks e.g. user creation

Page 98: Conceptos de Data Center


● Similar to Amazon IAM (Identity and Access Management)

● Keystone provides a common authentication and authorisation store for OpenStack

● Responsible for users, their roles, and to which project(s) they belong to

● Provides a catalogue of all other OpenStack services

● All OpenStack services typically rely on Keystone to verify a user’s request

OpenStack Identity Service

(Keystone)

Page 99: Conceptos de Data Center


OpenStack Compute

(Nova)

● Similar to Amazon EC2 (Elastic Compute Cloud)

● Nova is responsible for the lifecycle of running instances within OpenStack

● Manages server resources (CPU, memory, etc)

● Manages multiple different hypervisor types via drivers, e.g: RHEL (+KVM) and VMware vSphere

● NFV specific functions (vCPU pinning, Huge pages, NUMA awareness)

Page 100: Conceptos de Data Center


OpenStack Networking

(Neutron)

● Similar to Amazon VPC (virtual private cloud), ELB (elastic load balancing)

● Neutron is responsible for providing networking to running instances within OpenStack

● Provides an API for defining, configuring, and using networks

● Relies on a plugin architecture for implementation of networks, examples include

- Open vSwitch (default in Red Hat’s distribution)
- Cisco, Nuage Networks, VMware NSX, Arista, Mellanox, Brocade, Midokura, Dell, Radware, etc.

Page 101: Conceptos de Data Center


OpenStack Object Storage

(Swift)

● Similar to Amazon S3 (Simple Storage Service)

● Swift provides a mechanism for storing and retrieving arbitrary unstructured data

● Provides an object based interface via a RESTful/HTTP-based API

● Highly fault-tolerant with replication, self-healing, and load-balancing (scale out)

● Architected to be implemented using commodity compute and storage

Page 102: Conceptos de Data Center


OpenStack VM Image Storage

(Glance)

● Similar to Amazon AMI (Amazon Machine Images)

● Glance provides a mechanism for the storage and retrieval of disk images/templates

● Supports a wide variety of image formats, including qcow2, vmdk, ami, and ovf

● Many different backend storage options for images, including Swift, NFS, Ceph…

Page 103: Conceptos de Data Center


OpenStack Block Storage

(Cinder)

● Similar to Amazon EBS (Elastic Block Store)

● Cinder provides storage to instances running within OpenStack

● Used for providing persistent and/or additional storage

● Relies on a plugin/driver architecture for implementation, examples: Ceph, Red Hat Storage (GlusterFS), IBM XIV, HP Lefthand, 3PAR, etc

Page 104: Conceptos de Data Center


OpenStack Orchestration

(Heat)

● Similar to Amazon CloudFormation and ELB (elastic load balancing)

● Heat facilitates the creation of ‘application stacks’ made from multiple resources
● Stacks are imported as descriptive templates (YAML)

● Infrastructure resources that can be provisioned includes: servers, floating ips, volumes, security groups, users.

● Heat manages the automated orchestration (scaling included) of resources and their dependencies

● Allows for dynamic scaling of applications based on configurable metrics

Page 105: Conceptos de Data Center


OpenStack Telemetry

(Ceilometer, Gnocchi, Aodh)

● Similar to Amazon CloudWatch

● Ceilometer is a central collection of metering and monitoring data (metrics stored in Gnocchi, alarming with Aodh)

● Primarily used for chargeback of resource usage

● Ceilometer consumes data from the other components - e.g. via agents

● Architecture is completely extensible - meter what you want to - expose via API

Page 106: Conceptos de Data Center


OpenStack Data Processing

(Sahara)

● Aims to provide users with simple means to provision Hadoop clusters

● Support for different Hadoop distributions (pluggable system)

● Red Hat certifies distributions [Cloudera CDH 5.3.0 and HortonWorks Data Platform (HDP) 2.0]

Page 107: Conceptos de Data Center


OpenStack File Systems

(Manila)

● Shared File System as a Service

● NetApp driver integration

● CephFS native driver (Tech Preview)

Page 108: Conceptos de Data Center


● Deployment of OpenStack (undercloud/overcloud)

● Director GUI

● Composable Roles

● TripleO (OpenStack On OpenStack) is a program aimed at installing, upgrading and operating OpenStack clouds using OpenStack's own cloud facilities as the foundations - building on nova, neutron and heat to automate fleet management at datacentre scale.

OpenStack Deployment & Management

(Director)

Page 109: Conceptos de Data Center


● Tempest - API and scenario testing to validate an OpenStack deployment

● Rally - Deploy, verify, benchmark and testing of OpenStack (Tech Preview)

● DCI - Distributed Continuous Integration
○ The certification test suite rhcert runs as part of the CI.
○ This allows partners to keep their Red Hat OpenStack certification current for their plugins.

Other OpenStack Services

Page 110: Conceptos de Data Center


A simple workflow...

User: “I want an additional VM.”
Keystone: “Please authenticate with your credentials.” ... “Credentials verified. Here’s a token to talk to the other OpenStack services.”
Nova: “OK, we need to find a place to run this VM.” ... “OK, you can run it here.”
Nova → Neutron: “Neutron, please set up the network for this VM.”
Neutron: “I’ve enabled network policy for your VM. Here is your interface.”
Nova → Cinder: “Cinder, please create persistent storage for this VM.”
Cinder: “Created, you can mount it.”
Nova → Glance: “Hey Glance, can I get a RHEL image?”
Nova: “Here is your additional VM.”
User: “Thank you OpenStack! It’s alive!”
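The same workflow can be driven programmatically; a minimal sketch using the openstacksdk Python library (credentials are assumed to be configured in clouds.yaml, and the image, flavor, and network names are placeholders):

```python
import openstack

conn = openstack.connect(cloud="mycloud")             # Keystone authentication

image = conn.compute.find_image("rhel-guest-image")   # image registered in Glance
flavor = conn.compute.find_flavor("m1.small")         # Nova flavor
network = conn.network.find_network("private")        # Neutron network

# Nova schedules the instance and wires in Neutron networking.
server = conn.compute.create_server(
    name="additional-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # ACTIVE once the VM is up
```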

Page 111: Conceptos de Data Center

Ceph

Software Defined Storage

Page 112: Conceptos de Data Center


Storage Requirements in IaaS

● Commodity hardware for faster time to deploy

● Scale-out architecture to handle data growth

● Pricing elasticity to transition spending towards OPEX

● API-based management to facilitate automation

● Deep integration with OpenStack components and services

● Speed: VM booting, backups, disaster recovery and archiving

Page 113: Conceptos de Data Center


Red Hat Ceph Storage
Distributed, enterprise-grade object storage, proven at web scale

Open, massively-scalable, software-defined

Flexible, scale-out architecture on clustered commodity hardware

Single, efficient, unified storage platform (block, object, file)

User-driven storage lifecycle management with 100% API coverage

Integrated, easy-to-use management console

Designed for cloud infrastructure and emerging workloads

Page 114: Conceptos de Data Center


Ceph deeply integrated with OpenStack

● Ceph is bundled with RH-OSP

● Instantaneous booting of 1 or 100s of VMs

● Instant backups via seamless data migration between Glance, Cinder, Nova

● Tiered I/O performance within single cluster


Page 115: Conceptos de Data Center

RHOSP Management

33

Page 116: Conceptos de Data Center


OpenStack Management Day 0 and Beyond

[Diagram: Red Hat OpenStack director and CloudForms across the lifecycle – Day 0: Plan (pre-deployment planning), Day 1: Deploy (install & configure), Day 2: Operations & usage (monitor & diagnose, update & upgrade, consume); the Red Hat OpenStack director user interface and the Red Hat CloudForms user interface sit on top of the CloudForms engine.]

Page 117: Conceptos de Data Center

RHOSP Director

35

Page 118: Conceptos de Data Center


From upstream to product

UPSTREAM OPENSTACK (TripleO) → MIDSTREAM COMMUNITY (RDO-Manager) → DOWNSTREAM PRODUCT (RHEL OpenStack Platform director)

Page 119: Conceptos de Data Center


Red Hat OpenStack Platform director – Key Concept: We Have Two Clouds

Deployment and management cloud (Red Hat OpenStack Platform director)
● Infrastructure command and control
● Cloud operator visibility only
● Also known as the “Undercloud”

Production, tenant-facing cloud (Red Hat OpenStack Platform, deployed and managed by director)
● The OpenStack you know and love
● The cloud that your tenants will use
● Also known as the “Overcloud”

Page 120: Conceptos de Data Center


Deployment Flow: New hardware, racked and wired

Page 121: Conceptos de Data Center


Deployment Flow: Identified management node

Page 122: Conceptos de Data Center


Deployment Flow: Installed Red Hat OpenStack Platform director

director

Page 123: Conceptos de Data Center


Deployment Flow

director

Registered hardware

Page 124: Conceptos de Data Center


Deployment Flow

director

Hardware introspected for more detailed specification

Page 125: Conceptos de Data Center


Deployment Flow

director

Defined networking

Page 126: Conceptos de Data Center


Deployment Flow

Controller Node

director

Defined controller node

Page 127: Conceptos de Data Center


Deployment Flow

Resource Node (Compute)

Resource Node (Compute)

Controller Node

director

Defined resource nodes (Compute, Object Storage, Block Storage, Ceph)

Page 128: Conceptos de Data Center


Deployment Flow

Resource Node (Compute)

Resource Node (Compute)

Controller Node

director

Validating and deploying infrastructure

Page 129: Conceptos de Data Center


Deployment Flow

director

Resource Node (Compute)

Resource Node (Compute)

Controller Node

Undercloud (management)

Overcloud (workload)

Deployed Red Hat OpenStack Platform

Page 130: Conceptos de Data Center


Deployment Flow: Scalable and highly available architecture

Page 131: Conceptos de Data Center


Deployment Flow: Even for large and distributed data centers

Page 132: Conceptos de Data Center


COMPOSABLE ROLES – SCALE & DEPLOY COMPONENTS INDEPENDENTLY (RHOSP 10)

Hardcoded Controller Role (Keystone, Neutron, Database, RabbitMQ, Glance, ...) vs. Custom Roles – e.g. a Custom Controller Role, a Custom Networker Role, a Custom Telemetry Role, ... each composed of only the services it needs (Keystone, Neutron, Telemetry, RabbitMQ, Glance, ...).

Page 133: Conceptos de Data Center

CloudForms

51

Page 134: Conceptos de Data Center


Red Hat CloudForms

Cloud Management Platform (CMP)

Manages: virtualization (VMware©, Microsoft© Hyper-V, Red Hat Virtualization), private cloud (Red Hat© OpenStack Platform), public cloud (Amazon© Web Services, Microsoft Azure, Google© Cloud Platform), containers (Red Hat© OpenShift Container Platform), and software-defined networking.

Capabilities: Service Automation, Policy & Compliance, Operational Visibility, Unified Hybrid Management.

Page 135: Conceptos de Data Center


Operational Visibility with CloudForms

Challenges addressed:
● Lifecycle management: complete lifecycle management – provisioning, reconfiguration, deprovisioning, and retirement.
● Resource optimization: automatic resource optimization intelligently places VMs and offers right-sizing recommendations.
● Root-cause analysis: drill down through infrastructure layers to determine the root cause.
● Capacity management: resource tracking and trending aids in capacity and what-if scenario planning.

Page 136: Conceptos de Data Center


Root Cause Analysis

● View instance performance and resource usage over time to pinpoint problem initiation.
● Quickly compare system state against known good state or other systems.
● Navigate across relationships and drill down infrastructure layers to identify underlying causes.

Page 137: Conceptos de Data Center


Performance and Capacity Management

● Continuous data gathering for both greenfield and brownfield deployments.
● Resource utilization tracking and right-size recommendations.
● Projection and “what if” tools aid in future planning.

Page 138: Conceptos de Data Center


Policy and Compliance with CloudForms

CloudForms continuously monitors systems so they remain compliant and secure.

Smart State Analysis deeply scans systems to provide the policy engine with detailed information.

Chargeback/showback reports let users know the resources they are utilizing.

Our automatic provisioning includes automatic policy enforcement.

Quotas prevent over-provisioning compute, memory or storage resources.

Page 139: Conceptos de Data Center


Policy Enforcement

● Continuous discovery and deep SmartState inspection of virtual instances.
● Policy violations can raise alerts or be remediated automatically.
● Policy can be applied uniformly or based on virtual instance criteria.

Page 140: Conceptos de Data Center


Quotas and Charge Back

● Rate schedules per platform and per tenant, with multi-tiered and multi-currency support.
● Quotas are set by user, role and tenant and apply to compute, memory and storage resources.
● Monitor resource usage and report based on workload or tenant (a minimal chargeback sketch follows this list).
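To illustrate the chargeback/showback idea in the abstract (this is not the CloudForms rate model; the rates and usage figures below are invented), a per-tenant report might be computed like this:

```python
# Illustrative chargeback/showback calculation; the rate schedule and usage
# figures are invented and do not reflect CloudForms' actual rate model.
RATES = {  # currency units per unit of metered usage
    "vcpu_hours": 0.05,
    "memory_gb_hours": 0.01,
    "storage_gb_months": 0.10,
}

usage_by_tenant = {
    "tenant-a": {"vcpu_hours": 1200, "memory_gb_hours": 4800, "storage_gb_months": 500},
    "tenant-b": {"vcpu_hours": 300, "memory_gb_hours": 1200, "storage_gb_months": 2000},
}

def showback(usage):
    """Sum metered usage multiplied by the applicable rate."""
    return sum(RATES[metric] * amount for metric, amount in usage.items())

for tenant, usage in usage_by_tenant.items():
    print("%s: %.2f" % (tenant, showback(usage)))
```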

Page 141: Conceptos de Data Center

Red Hat OpenStack Platform

Roadmap

Page 142: Conceptos de Data Center


Red Hat OpenStack Platform - 40k Foot View

OSP 11 (Ocata) - projects: neutron, nova, cinder, horizon, heat, keystone, sahara, ironic, tempest, rally, manila, ceilometer, glance
Highlights (Elastic IaaS ⇣ / ⇡ NFV - Telco): Composable Upgrades, VLAN Aware VMs, Ceph RBD Volume Replication, HCI Full Support, NFV Composable Roles

OSP 10 (Newton) - projects: neutron, nova, cinder, horizon, heat, keystone, sahara, ironic, tempest, rally, manila, ceilometer, glance
Highlights (Elastic IaaS ⇣ / ⇡ NFV - Telco): Composable Services & Custom Roles, OVS-DPDK full support, Distributed Virtual Routing, DPDK vhost-user multi-queue, Director GUI and validations API, SR-IOV director deployment

CONFIDENTIAL - NDA REQUIRED

Page 143: Conceptos de Data Center

THANK YOU

plus.google.com/+RedHat

linkedin.com/company/red-hat

youtube.com/user/RedHatVideos

facebook.com/redhatinc

twitter.com/RedHatNews

Page 144: Conceptos de Data Center

End to End Network Function Virtualization Architecture Instantiation

How Intelligent NFV Orchestration based on industry standard information models will enable Carrier Grade SLAs

Executive Summary

For over 2 years now, since the ETSI NFV ISG inception in Darmstadt in October 2012 [1], NFV has been capturing the Telco industry’s imagination; its promises and benefits are clear and well understood. Since its inception, the industry has seen huge improvements in base technology layers, including servers (Intel), hypervisor technology (Red Hat), and software libraries (Intel® DPDK), enabling the design of telco-grade Virtual Network Functions (VNFs) (such as Brocade’s VRouter 5600*), which have proliferated and evolved extensively during this period. As a direct result, a new and open ecosystem of VNF providers is beginning to flourish.

However, this is only part of the industry issue. These VNFs, designed for carrier-class scalability, need to be properly deployed in the underlying infrastructure in order to behave as expected, allowing them to leverage all those new technology advances. NFV Management and Orchestration (MANO) and the associated information models, describing both the infrastructure and VNF requirements, are key to achieving this goal effectively and in a cost-efficient manner for the service provider. Hence, legacy cloud management systems (CMS) will simply not suffice for true NFV deployments.

Cyan, Brocade, Intel, and Red Hat have combined with the Telefónica NFV Reference Lab at its GCTO Unit in Madrid to showcase how a realistic network scenario can be designed, modelled, and deployed via NFV Orchestration (Cyan Blue Planet) onto an NFV-ready infrastructure through the Telefónica-designed VIM (Virtual Infrastructure Manager). This new and correctly optimized NFV delivery stack is compared to what can be achieved with a typical cloud deployment model as it exists today.

The results show the phenomenal benefits achievable through end to end system NFV awareness. The service scenario deployed in Telefónica’s NFV Labs in Madrid shows up to a 100x improvement in throughput for a typical routing scenario with respect to the same scenario as deployed in a typical enterprise cloud.

Key to unleashing this performance is the correct modelling of the key attributes required from Virtual Network Functions (VNFs), and exposing this information as the deployment decision criteria in the NFV delivery stack, i.e., the NFV Orchestrator and the Virtual Infrastructure Manager (VIM). The availability of such NFV-ready orchestration components, together with appropriate standardized descriptors for VNFs and infrastructure, will be the key enabler of large-scale NFV deployments in coming years.

White Paper

Intel, Brocade, Cyan, Red Hat, and Telefónica – NFV Services

Page 145: Conceptos de Data Center

Table of Contents

Executive Summary ................................ 1

Introduction ............................................. 2

Service Scenario Overview ................... 4

Partners and Contributed Components ..................... 4

Cloud Versus NFV.................................... 4

Scenario Execution Results .................. 5

Conclusions .............................................. 7

Testimonials ............................................. 7

References ................................................ 8

Acronyms ................................................... 8


Introduction

A key component of the NFV vision is one of “The Network becoming a Data Centre,” a radically new network enabled by leveraging the commodity price points and capabilities emerging from the $30-$50 billion per annum global investment in data center technology to enable the delivery of Telco grade virtual network appliances on top as VNFs.

Network functions like Evolved Packet Core (EPC), 3G wireless nodes, Broadband Network Gateways (BNG), Provider Edge (PE) routers, firewalls, etc., have traditionally been delivered on bespoke standalone appliances. NFV aspires to replace this hardware-centric approach with a software model which delivers comparable functionality as SW VNFs on standard high volume industry server HW. This transformation is referred to as NFV, and the concept is well understood and accepted by the Telco industry as a key lever in the transformation of the network toward a more flexible and mouldable infrastructure.

However, this in itself is not sufficient. Deploying a Telco network service presents additional complexities that typically don’t exist in today’s data center:

• For example, each Telco service delivered to the broadband consumer comes with a service SLA which must be achieved and enforced. This must also take into account how to achieve proper scale as service adoption ensues. This places essential focus on how data plane workloads are handled in terms of throughput, packet loss guarantees, and latency effects. These are the attributes which most affect virtual application performance and which Telco service providers must ensure as part of a customer’s SLA guarantees.

• Additionally, control of network topology, VNF location, link bandwidths, and QoS guarantees is hugely important in Telco. This is the foundation on which Communication Service Providers must deliver their services. This approach deviates greatly from the centralized data center paradigm, where the topology is mostly static and where visibility and basic placement considerations for the stand-alone VMs are the primary attributes required for service continuity.

• In this new NFV world, the virtual network functions will be delivered by many different vendors. Unless the community embraces a very well understood, open, standards-based service information model, this new flexibility will become difficult to manage and will in itself become a problem. The proposal in this white paper and the associated E2E implementation is to use TOSCA as the service description language. This also enables an easy extension path to include the key NFV attributes required for this deterministic performance (an illustrative sketch of such descriptor attributes follows this list).
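As a purely illustrative sketch (not a normative TOSCA or ETSI VNFD schema), the kind of EPA-related attributes such a descriptor might carry could be modelled as follows; every field name below is an assumption made for this example only.

```python
# Illustrative only: a toy model of the EPA-related attributes a VNF
# descriptor might expose to the orchestrator/VIM. The field names are
# invented for this sketch and are not a normative TOSCA/ETSI schema.
vnf_descriptor = {
    "name": "pe-router-vnf",          # hypothetical VNF name
    "vcpus": 8,
    "memory_mb": 16384,
    "epa": {
        "hugepages": "1GB",           # back guest memory with 1 GiB pages
        "cpu_pinning": True,          # dedicate physical cores to vCPUs
        "numa_nodes": 1,              # keep vCPUs, memory and NICs on one node
        "passthrough_interfaces": 2,  # NICs handed directly to the VM (PCIe)
    },
}

def placement_constraints(vnfd):
    """Turn descriptor EPA hints into human-readable scheduling constraints."""
    epa = vnfd["epa"]
    constraints = ["host must offer %s huge pages" % epa["hugepages"]]
    if epa["cpu_pinning"]:
        constraints.append("%d dedicated pCPUs on %d NUMA node(s)"
                           % (vnfd["vcpus"], epa["numa_nodes"]))
    if epa["passthrough_interfaces"]:
        constraints.append("%d PCIe passthrough NIC(s) local to the same NUMA node"
                           % epa["passthrough_interfaces"])
    return constraints

print(placement_constraints(vnf_descriptor))
```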

As described earlier, huge industry investment has enabled industry standard high volume servers [1] to deal effectively with I/O-intensive workloads as required in today’s Telco environments. Thus, the most recent x86 processor generations, working in conjunction with suitably enabled hypervisors and using specialized open source software libraries such as DPDK (Data Plane Development Kit), have enabled standard high volume servers to deal efficiently with edge functions such as BNG, PE router, and EPC workloads. This creates the opportunity of enabling reliable NFV deployments ensuring that true Telco grade SLAs are achieved.

In order to enable these new Telco grade services, it is essential that the appropriate infrastructure resources are properly allocated to the VNF. Thus, practices such as taking into account the internal server memory topology, the allocation of CPUs and I/O interfaces to virtual machines, the usage of memory in “huge pages” for efficient lookups, or the direct assignment of interfaces to the VM, among others, become essential to assure a given VNF SLA in terms of performance, scalability, and predictability [2]. This type of deterministic resource allocation, including this

Page 146: Conceptos de Data Center


new enhanced platform awareness capability, not present in cloud computing environments, now becomes a necessity for carrier grade NFV deployments.
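As a small, hedged illustration of what such platform awareness has to know about a host, the sketch below reads the standard Linux sysfs layout to report per-NUMA-node availability of 1 GiB huge pages; it assumes a Linux NFVI host and is not part of any VIM implementation.

```python
# Illustrative host introspection for EPA-style scheduling: report how many
# free 1 GiB huge pages each NUMA node offers. Assumes a Linux host exposing
# the standard sysfs layout; a sketch, not part of any VIM implementation.
import glob
import re

def hugepages_1g_per_numa_node():
    """Map NUMA node id -> number of free 1 GiB huge pages."""
    result = {}
    pattern = "/sys/devices/system/node/node*/hugepages/hugepages-1048576kB/free_hugepages"
    for path in glob.glob(pattern):
        match = re.search(r"node(\d+)", path)
        if not match:
            continue
        with open(path) as f:
            result[int(match.group(1))] = int(f.read().strip())
    return result

if __name__ == "__main__":
    nodes = hugepages_1g_per_numa_node()
    if not nodes:
        print("No 1 GiB huge pages configured on this host.")
    for node, free in sorted(nodes.items()):
        print("NUMA node %d: %d free 1 GiB huge pages" % (node, free))
```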

The ETSI-NFV reference architecture defines a layered approach to such an NFV deployment (see Figure 1).

Toward ensuring portable and deterministic VNF performance, it is paramount to expose the relevant NFVI attributes up through the new NFV delivery stack. This allows the management and orchestration layers to ensure correct allocation of the resources for the end-to-end network service scenario. Likewise, the information models (Network Service, VNF, and Infrastructure descriptions) describing the resource requirements of the Virtual Network Function (VNF) and of the global service scenario are key to enabling these NFV provisioning layers (NFV Orchestrator and VIM) to make these intelligent and optimal deployment decisions.

[Figure 1. ETSI End to End NFV Architecture: OSS/BSS and EMSs; NFV Management and Orchestration (NFV Orchestrator, VNF Manager(s), Virtualized Infrastructure Manager(s)); VNFs running over the NFVI (virtual computing, storage and network on a virtualization layer over compute, storage and network hardware); main reference points Os-Ma, Or-Vi, Or-Vnfm, Vi-Vnfm, Ve-Vnfm, Nf-Vi, Vn-Nf and Vi-Ha.]

This Enhanced Platform Awareness (EPA) capability allows the NFV orchestration platform to intelligently deploy well-designed VNF workloads onto the appropriate underlying NFVI, enabling the optimal SLAs. This also unleashes the favorable total cost of ownership (TCO) NFV promises, due to this more efficient use of the underlying infrastructure. This must be achieved through the implementation of a more versatile NFV-ready infrastructure, and a more agile and competitive ecosystem of network function providers enabled through such an open information model.

Toward demonstrating this NFV deployment approach, Intel, Telefónica, Cyan, Brocade, and Red Hat have collaborated to implement and demonstrate a complete ETSI-NFV end to end service deployment solution stack.

Page 147: Conceptos de Data Center


Service Scenario Overview

VNF Routing Scenario Overview

• The scenario being deployed is a routed VNF forwarding graph using Brocade Vyatta vRouters as VNFs. A three node network forwarding topology achieves a 40 Gbps network throughput between the ingress and egress points at Routers A and C (see Figure 2).

• The exposure of the performance enablers (NFVI attributes) in the VNF Descriptor and the importance of a good VNF design (Vyatta vRouter) are crucial toward enabling this service deployment.

• The end to end NFV service delivery stack, with the relevant NFV intelligence built in at each layer through the information model, the VNF, the NFV Orchestrator, the VIM and finally the NFVI, is required for an optimal VNF service chain deployment.

• The use of industry standard, open, and extensible information models such as TOSCA and suitable VNF formats is crucial toward enabling an open ecosystem of VNF vendors to construct and deliver their services into this new end to end architecture.

The scenario also showcases the importance of a well-designed, standard high volume industry server HW based NFVI, which provides the EPA services required for the deployment of Telco grade VNFs.

Partners and Contributed Components

The lab environment is located at Telefónica’s Global CTO NFV lab in Madrid. As per Figure 3, the infrastructural components are provided as follows:

Intel components include:

• Intel® Xeon® processor-based servers and Network Interface Cards
– Intel® Xeon® processor E5-2680 v2 @ 2.80 GHz
– Intel® Open Networking Platform (ONP) ingredients including DPDK R1.6 [3]
– Intel® X520 10G Network Interface Cards

Brocade components include:

• Brocade Vyatta vRouter 5600 3.2 R2
• OpenFlow switch (Brocade NetIron MLXe)

The Cyan components include:

• NFV-Blue Planet Orchestrator release 15.02

The Telefónica components include:

• DPDK R1.6 based traffic generator TIDGEN (Telefónica I+D Generator)
• Telefónica VIM openvim R0.9

The Red Hat components include:

• RHEL 7.0 (with patches) and QEMU-KVM version 2.0.0 (with patches)

Cloud Versus NFV

As mentioned, the demonstration is hosted at Telefónica’s NFV Reference Lab (physically located in Madrid) and provides two separate deployment environments (see Figure 4):

• An NFV-ready NFVI pool, with a Telefónica-developed NFV-ready VIM implementing the requisite Enhanced Platform Awareness (EPA) and a Cyan NFV Orchestrator supporting advanced VNF deployment using enhanced NFV information models.

• A standard cloud infrastructure pool, à la classic cloud computing, with the same Telefónica VIM connected to the same Cyan NFV Orchestrator, but in this case not using the enhanced information model as the basis for the deployment.

[Figure 2. PE VNF Routing Service Chain: a traffic generator feeds a PE router network scenario built from three vRouters (Router A, Router B, Router C) interconnected by 10G and 20G links, with 40G aggregate at the ingress and egress points; each router also exposes a management interface (Mgmt IF).]

Page 148: Conceptos de Data Center


Figure 3. Partners and System Component Contributions.

Starting with both server pools empty (no VNFs deployed), the demo scenario is deployed onto each platform through the Orchestrator. With both setups running, performance measurements are displayed in real time, showing much higher and more stable throughput in the NFV-ready scenario.

Information models for both scenarios are compared, showcasing the key additional attributes and end to end EPA awareness required for the optimized NFV deployment.

Scenario Execution Results

Initial Sub Optimal Cloud Deployment

[Figure 3 (diagram): the ETSI NFV architecture overlaid with the contributed components: the NFV Orchestrator (NFVO), the VIM, the NFVI (servers + NICs, hypervisor, switches), and the VNFs (DPDK-based vRouters and a DPDK-based traffic generator).]

The initial deployment demonstrates the typical issues with doing a “blind”, enterprise-cloud-like deployment of a typical VNF onto an underlying non-NFV-optimized infrastructure.

The Brocade routing scenario is deployed. Since a suboptimal NFV information model is used, the Brocade vRouter is incorrectly deployed through the non-aware MANO stack and is unable to fully achieve the 23 Mpps (40 Gbps @ 192 byte packet size) it is designed to achieve, instead reaching a mere 270 Kpps, largely because of the following (see Figure 5):

• No PCIe* pass-through: the NIC is not directly connected to the vRouter, which now receives and transmits packets via the vSwitch. This is a suboptimal networking path to the VNF compared to PCIe pass-through mode and limits the throughput achievable at acceptable packet loss.

• No NUMA affinity: vCPUs are arbitrarily allocated from a CPU socket that may not be directly attached to the NICs and may also use a non-local memory bus.

• No CPU pinning: vCPUs allocated to the vRouter may be shared or dynamically rescheduled, limiting determinism.

• No 1G huge page setup: this greatly limits the performance achievable with DPDK (Vyatta), especially for small packet sizes, and does not correctly leverage the recent advances in server IOTLB and VT-d architecture.

Page 149: Conceptos de Data Center

Figure 5 highlights the suboptimal performance achieved.

Optimal NFV Deployment

The secondary deployment uses the correct NFV TOSCA and VNFD models, and the information in the model allows the Blue Planet orchestrator to optimally deploy the Brocade configuration through the Telefónica VIM, achieving full line rate performance of 23 Mpps (40 Gbps @ 192 bytes). See Figure 5 for performance.
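As a back-of-the-envelope sanity check on the headline numbers (assuming the standard Ethernet per-frame overhead of 20 bytes on the wire for preamble, SFD and inter-frame gap), the theoretical packet rate at 40 Gbps can be derived as follows:

```python
# Back-of-the-envelope line-rate check: packets per second on a 40 Gbps link.
# Assumes 20 bytes of Ethernet overhead per frame on the wire
# (7 B preamble + 1 B SFD + 12 B inter-frame gap) on top of the frame size.
def max_pps(link_bps, frame_bytes, overhead_bytes=20):
    bits_per_frame = (frame_bytes + overhead_bytes) * 8
    return link_bps / bits_per_frame

for size in (192, 1518):
    print("%4d-byte frames @ 40 Gbps: %.1f Mpps" % (size, max_pps(40e9, size) / 1e6))
# ~23.6 Mpps at 192 bytes and ~3.3 Mpps at 1518 bytes, consistent with the
# 23 Mpps / 3.2 Mpps figures reported for the NFV-aware deployment.
```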

This deployment scenario demonstrates the benefits of doing an “intelligent” NFV deployment through the EPA-aware delivery stack onto the underlying NFVI, using the correct extended information model containing the attributes required for deterministic VNF performance.

The Brocade vRouter is deployed with the correct EPA parameters exposed via the VNF Descriptor and enforced by the Cyan NFVO and the VIM. The Brocade vRouter is able to achieve the high performance expected by design, with PCIe pass-through, NUMA awareness, CPU pinning, and the huge page requirement implemented correctly, as required by the Brocade VNF. A sketch of how such requirements can be expressed at the VIM level follows.
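To make that concrete at the VIM level, OpenStack Compute can express EPA requirements of this kind as flavor extra specs for CPU pinning, huge pages and NUMA placement. The flavor name, the sizes and the "vrouter_nic" PCI alias below are illustrative assumptions rather than the configuration used in the lab, and exact client syntax may vary by release.

```python
# Illustrative mapping from EPA requirements to OpenStack Nova flavor extra
# specs. The keys are standard Nova hints for pinning, huge pages and NUMA;
# the flavor name, values and the "vrouter_nic" PCI alias are invented here.
epa_extra_specs = {
    "hw:cpu_policy": "dedicated",              # pin vCPUs to dedicated host cores
    "hw:mem_page_size": "1GB",                 # back guest RAM with 1 GiB huge pages
    "hw:numa_nodes": "1",                      # keep CPUs, RAM and NICs on one NUMA node
    "pci_passthrough:alias": "vrouter_nic:2",  # hand two NICs to the VM over PCIe
}

def flavor_set_commands(flavor, specs):
    """Render illustrative `openstack flavor set` commands for the specs."""
    return ["openstack flavor set %s --property %s=%s" % (flavor, k, v)
            for k, v in specs.items()]

for cmd in flavor_set_commands("vrouter.epa", epa_extra_specs):
    print(cmd)
```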

Figure 6 demonstrates similar line rate performance, but for larger packet sizes.


[Figure 5. Performance Comparison for 192 byte frame size: throughput over time in Mpps and Gbps for the NFV deployment versus the cloud deployment; the NFV deployment sustains about 23 Mpps while the cloud deployment reaches only about 270 Kpps.]

[Figure 6. Performance Comparison for 1518 byte frame size: the NFV deployment sustains about 3.2 Mpps while the cloud deployment reaches only about 0.02 Mpps.]

[Figure 4. Cloud vs. NFV: in the cloud computing view, traffic between the virtual machines traverses the hypervisor vSwitch, which becomes the bottleneck; in the network virtualisation view the vSwitch is bypassed, the data plane is managed directly (polling mode drivers, full assignment of cores to the process), cache sharing is maximised, memory translations and QPI usage are minimised, and I/O devices are assigned directly to the virtual machines.]

Page 150: Conceptos de Data Center


Conclusions

• True end to end NFV-aware system designs will deliver the huge VNF performance improvements (23 Mpps vs. 270 Kpps) necessary for Telco grade performance.

• Properly developed end to end NFV solutions will reduce network TCO and allow a new ecosystem of VNF providers to flourish.

• The community must understand these Enhanced Platform Awareness (EPA) capabilities and ensure they are properly exposed up through the end to end system. This requires taking an end to end system view toward implementing the appropriate levels of intelligence up through the NFV delivery stack, to maximize the application performance and determinism required for Telco grade SLA deployments.

• The VNF community must understand these capabilities and build/model their VNFs accordingly.

• An intelligent EPA-aware Orchestrator and VIM are the key components toward releasing complete NFV TCO value.

• Intel, Red Hat, Cyan, and Telefónica will continue to work to enable OpenStack (VIM) with these critical NFV EPA enhancements [4].

• Standard and open information models are also crucial to enable the open VNF ecosystem and the transition from the world of monolithic, vertically integrated network appliances to SW-defined network functions.

• The standardization of the NFV service information model, as well as the availability of open source components such as DPDK, OpenStack, and optimized KVM, are key toward unleashing the promise of open NFV solutions leveraging best-of-breed cloud open source technologies.

Testimonials

Telefónica

“Telefónica’s vision about Virtualized Network is an E2E virtualization approach, from customer premises to the inner network infrastructure, as a way to improve capacity and flexibility and to obtain better TCO. Telefónica NFV Reference Lab aims to help the ecosystem of partners and network equipment vendors to test and develop virtualized network functions leveraging on an advanced NFV orchestration framework and proper capabilities for deterministic resource allocation in the pool. NFV Reference Lab drives this adoption through the release of open source code, thus encouraging software developers to explore new NFV possibilities, and all this from a well-designed and tiered architecture proposal. Its aim is to promote interoperability and provide a more open ecosystem so that telecommunications providers adapt and expand their network services more easily.”

– Enrique Algaba, Network Innovation and Virtualisation Director, Telefónica I+D-Global CTO

Cyan

“The intelligent NFV orchestration and placement PoC with Telefónica at Mobile World Congress is a clear example of the power of collaboration as it relates to driving real-world NFV use cases,” said Mike Hatfield, president, Cyan. “The multi-vendor platform provides a unique framework for showcasing how Brocade’s VNF and Telefónica’s VIM can expose performance requirements and characteristics to Cyan’s enhanced infrastructure-aware NFV orchestrator. The orchestrator will intelligently place the VNFs on Intel servers to meet the VNF’s specific performance needs and efficiently use compute resources to deliver end-to-end services. This is an important issue that needs to be solved by the industry for deployment of NFV-enhanced services at massive scale.”

– Mike Hatfield, President, Cyan

Brocade

“Brocade welcomes the advancements in intelligent orchestration, continued partnership within open initiatives and execution toward key NFV standards. The flexibility and openness of Intel’s Network Builders Community has brought together committed partners dedicated to accelerating the industry’s transition to the New IP. The combined efforts of partners such as Telefónica, Intel, and Cyan highlight key architecture benefits of Brocade’s VNF platforms, the Vyatta 5600 vRouter, and its inherent open information data model for facilitating a migration to intelligent architectures with high performance. This also highlights the value of NFV orchestrators and their importance to effective and optimal network deployments, with Telefónica leading the charge to demonstrate NFV without sacrifice.”

– Robert Bays, VP of Engineering, Brocade Software Networking

Page 151: Conceptos de Data Center

References

1. https://portal.etsi.org/NFV/NFV_White_Paper.pdf

2. ETSI GS NFV-PER 001 V1.1.2 - “Network Functions Virtualisation (NFV); NFV Performance & Portability Best Practises” http://docbox.etsi.org/ISG/NFV/Open/Published/gs_NFV-PER001v010102p%20-%20Perf_and_Portab_Best_Practices.pdf

3. http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/open-network-platform-server-paper.pdf

4. https://software.intel.com/sites/default/files/managed/72/a6/OpenStack_EPA.pdf

5. http://www.etsi.org/deliver/etsi_gs/NFV/001_099/002/01.02.01_60/gs_NFV002v010201p.pdf

6. https://networkbuilders.intel.com/docs/Intel_Network_Builders_Directory_Sept2014.pdf

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, and Xeon are trademarks of Intel Corporation in the U.S. and other countries. * Other names and brands may be claimed as the property of others. 0215/EC/HBD/PDF 332099-001US


Testimonials (continued)

Intel

“Intel believes SDN-NFV is an industry inflection point and is committed to ensuring the new network architecture transformation is built on an open architecture, using open standards and enabling an open ecosystem. Intel is committed to delivering NFV and is actively working through the relevant standards and open source initiatives toward making this a reality. Intel will make all its ingredients open source [5] through its Open Networking Platform program and is working closely with its Netbuilders SDN-NFV ecosystem community [6] partners such as Cyan, Brocade, and Telefónica to make this a reality.”

– Rene Torres, Intel SDN-NFV Marketing Director

Red Hat

“Building the foundation for an open NFV infrastructure requires expertise in Linux, KVM, and OpenStack—all areas of open source where Red Hat is a leading contributor,” said Radhesh Balakrishnan, general manager, OpenStack, Red Hat. “By collaborating on the NFV Reference Lab, we’re not only bringing features and expertise back to the upstream OpenStack community and our carrier-grade Red Hat Enterprise Linux OpenStack platform, but also enabling CSPs to successfully implement their modernization plans through NFV.”

– Radhesh Balakrishnan, General Manager, OpenStack, Red Hat

Acronyms

BNG Broadband Network Gateway

BSS Business Support System

CMS Cloud Management System

CPU Central Processing Unit

vCPU Virtual Central Processing Unit

DPDK Data Plane Development Kit

EPC Evolved Packet Core

EMS Element Management System

EPA Enhanced Platform Awareness

GCTO Global Chief Technical Office

IOTLB I/O Translation Lookaside Buffer

NIC Network Interface Card

NFV Network Function Virtualization

NFVI Network Function Virtualized Infrastructure

NFV-O Network Function Virtualization Orchestrator

NUMA Non Uniform Memory Access

OSS Operations Support System

PE Provider Edge Router

PCIe Peripheral Component Interconnect Express

QoS Quality of Service

SLA Service Level Agreement

TCO Total Cost of Ownership

VIM Virtual Infrastructure Manager

VNF Virtual Network Function

VT-d Intel® Virtualization Technology for Direct I/O