
How to install Google Chrome in Ubuntu

The google-chrome-stable package is available from a third-party repository (Google's own APT repository).

Follow these instructions to install it:

1. Add the signing key:

wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add - 

2. Set up the repository:

sudo sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'

3. Install the package:

sudo apt-get update
sudo apt-get install google-chrome-stable
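
To confirm the installation, the browser version can be checked from the command line (the google-chrome-stable binary is installed by the package above):

google-chrome-stable --version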

What are the implications of changing socket buffer sizes?

This section discusses the implications of changing the values of the following parameters:
/proc/sys/net/core/rmem_default = 524288
/proc/sys/net/core/rmem_max = 524288
/proc/sys/net/core/wmem_default = 524288
/proc/sys/net/core/wmem_max = 524288
  • Increasing rmem/wmem increases the buffer size allocated to every socket opened on the system. These values need to be tuned for your environment and requirements. A higher value may increase throughput to some extent, but it will affect latency. You need to determine which is more important for you, and a suitable value can only be arrived at by repeated testing (see the sketch after this list).

  • When buffering is enabled, a received packet is not immediately processed by the receiving application. With a large buffer this delay increases, because the packet has to wait for the buffer backlog to be emptied before it gets its turn for processing.

  • Buffering is good for increasing throughput, because by keeping the buffer full the receiving application always has data to process. But this affects latency, as packets wait longer in the buffer before being processed. For more information, see also Bufferbloat: http://en.wikipedia.org/wiki/Bufferbloat
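
A minimal sketch of how these values can be inspected and changed at runtime with sysctl (the 524288 figure is just the example value from above, not a recommendation):

# Show the current defaults and maximums
sysctl net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max
# Raise the limits for the current boot only
sudo sysctl -w net.core.rmem_max=524288
sudo sysctl -w net.core.wmem_max=524288
# To make the change persistent, add the same settings to /etc/sysctl.conf and reload
sudo sysctl -p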


kernel: Out of socket memory

The solution for this is to increase the TCP memory. This can be done by adding the following parameters to /etc/sysctl.conf:
net.core.wmem_max=12582912
net.core.rmem_max=12582912
net.ipv4.tcp_rmem= 10240 87380 12582912
net.ipv4.tcp_wmem= 10240 87380 12582912
These figures are just an example and need to be tuned on a per-system basis. Along the same lines, the tcp_max_orphans sysctl value can be increased, but each orphan entry has a memory overhead of roughly 64 KB and needs careful tuning.
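
After editing /etc/sysctl.conf, the new values can be applied without a reboot and then verified (a quick check with the standard sysctl tool):

sudo sysctl -p
sysctl net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem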

  • For more information on tuning socket buffers refer to: How to tune the TCP Socket Buffers?

There are three factors which may cause the problem:
  1. The networking behavior of your system, for example, how many TCP sockets are created on it.
  2. How much RAM the system has.
  3. The following two kernel parameters:
/proc/sys/net/ipv4/tcp_mem
/proc/sys/net/ipv4/tcp_max_orphans

An example on a system with 1 GB of RAM:
# cat /proc/sys/net/ipv4/tcp_max_orphans 
32768

# cat /proc/sys/net/ipv4/tcp_mem 
98304     131072     196608
The meaning of the two kernel parameters:
  1. tcp_max_orphans -- The maximal number of TCP sockets not attached to any user file handle, held by the system. If this number is exceeded, orphaned connections are reset immediately and a warning is printed. The default value of this parameter on RHEL 5.2 is 32768.
  2. tcp_mem -- vector of 3 INTEGERs: min, pressure, max.
  • min: below this number of pages TCP is not bothered about its memory appetite.
  • pressure: when amount of memory allocated by TCP exceeds this number of pages, TCP moderates its memory consumption and enters memory pressure mode, which is exited when memory consumption falls under "min". The memory pressure mode presses down the TCP receive and send buffers for all the sockets as much as possible, until the low mark is reached again.  
  • max: number of pages allowed for queuing by all TCP sockets.
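
Since tcp_mem is expressed in pages, the thresholds from the 1 GB example above can be converted to bytes (assuming the common 4 KiB page size):

echo $((98304 * 4096)) $((131072 * 4096)) $((196608 * 4096))
# 402653184 536870912 805306368 bytes, i.e. roughly 384 MB, 512 MB and 768 MB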

If the number of orphaned sockets exceeds the value of tcp_max_orphans, the message "kernel: Out of socket memory" may be triggered.

If the total number of memory pages assigned to all TCP sockets on the system exceeds the max value of tcp_mem, the same message may be triggered.

Either of the situations above will trigger the message "kernel: Out of socket memory".
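
As a quick check against these two limits, the current orphan count and TCP page usage can be compared with the configured values (the fields are those reported by /proc/net/sockstat, described in the last section below):

cat /proc/sys/net/ipv4/tcp_max_orphans /proc/sys/net/ipv4/tcp_mem
grep 'TCP:' /proc/net/sockstat
# The "orphan" field corresponds to tcp_max_orphans, and the "mem" field (in pages)
# is compared against the third (max) value of tcp_mem.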


Logic behind killing processes during an Out of Memory situation


A simplified explanation of the OOM-killer logic follows.
A function called badness() is defined to calculate points for each process. Points are added to:

  • Processes with high memory usage
  • Niced processes

Badness points are subtracted from:

  • Processes which have been running for a long time
  • Processes which were started by superusers
  • Processes with direct hardware access

The process with the highest number of badness points will be killed, unless it is already in the midst of freeing up memory on its own. (Note that if a process has 0 points it cannot be killed.)

The kernel will wait for some time to see if enough memory is freed by killing one process. If enough memory is not freed, the OOM kills will continue until enough memory is freed or until there are no candidate processes left to kill. If the kernel is out of memory and is unable to find a candidate process to kill, it panics with a message like:

Kernel panic - not syncing: Out of memory and no killable processes...
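
The badness score that the kernel currently assigns to a process can be inspected, and its OOM priority adjusted, through /proc. A sketch (on older kernels such as RHEL 5 the tunable is oom_adj; newer kernels use oom_score_adj instead):

cat /proc/$$/oom_score      # current badness score of this shell
echo -17 > /proc/$$/oom_adj # -17 exempts the process from the OOM killer (requires root)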

How much memory is used by TCP/UDP across the system?


Socket memory usage is visible in /proc/net/sockstat.

sockets: used 870
TCP: inuse 21 orphan 0 tw 0 alloc 28 mem 10
UDP: inuse 9 mem 6
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0
This shows that the system is currently using 10 pages for TCP sockets and 6 pages for UDP sockets. Note that the min, pressure and max settings in net.ipv4.tcp_mem are expressed in pages too.
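
To convert those page counts to bytes, multiply by the system page size (usually 4096 bytes on x86):

getconf PAGESIZE
echo $((10 * $(getconf PAGESIZE)))   # TCP socket memory, in bytes, for the example above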