30 July 2015

STUN


STUN (Session Traversal Utilities for NAT) is a standardized set of methods and a network protocol that allows an end host to discover its public IP address if it is located behind a NAT. It is used to permit NAT traversal for real-time voice, video, messaging, and other interactive IP communication applications. It is documented in RFC 5389, and the STUN URI scheme is documented in RFC 7064. STUN is intended as a tool to be used by other protocols, such as ICE.
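As a concrete illustration of the protocol, the sketch below sends a single RFC 5389 Binding request over UDP and decodes the XOR-MAPPED-ADDRESS attribute from the response. It assumes the public server stun.l.google.com:19302 is reachable; any STUN server should work, though some older servers answer with MAPPED-ADDRESS (0x0001), which this sketch does not handle.

import os
import socket
import struct

MAGIC_COOKIE = 0x2112A442

def stun_binding_request(host="stun.l.google.com", port=19302):
    # Header: type 0x0001 (Binding request), length 0, magic cookie,
    # and a random 12-byte transaction id.
    txn_id = os.urandom(12)
    request = struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + txn_id

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3)
    sock.sendto(request, (host, port))
    data, _ = sock.recvfrom(2048)

    # Walk the attributes after the 20-byte header, looking for
    # XOR-MAPPED-ADDRESS (0x0020); a full client would also verify txn_id.
    pos = 20
    while pos + 4 <= len(data):
        attr_type, attr_len = struct.unpack_from("!HH", data, pos)
        if attr_type == 0x0020:
            xport, = struct.unpack_from("!H", data, pos + 6)
            xaddr, = struct.unpack_from("!I", data, pos + 8)
            ip = socket.inet_ntoa(struct.pack("!I", xaddr ^ MAGIC_COOKIE))
            return ip, xport ^ (MAGIC_COOKIE >> 16)
        pos += 4 + attr_len + (-attr_len % 4)  # values are padded to 4 bytes
    return None

print(stun_binding_request())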


STUN client:
A Python STUN client for getting NAT type and external IP


Installation and operation:

hSammir@hSammir-PC ~
$ pip install pystun
You are using pip version 6.0.8, however version 7.1.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting pystun
  Downloading pystun-0.1.0.tar.gz
Installing collected packages: pystun
  Running setup.py install for pystun
    Installing pystun script to /usr/bin
Successfully installed pystun-0.1.0

hSammir@hSammir-PC ~
$ pystun
NAT Type: Full Cone
External IP: 36.68.30.220
External Port: 11005

hSammir@hSammir-PC ~
$ pystun -H stun.bahnhof.net -P 3478
NAT Type: Full Cone
External IP: 36.68.30.220
External Port: 11196
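
pystun can also be used as a library instead of via the command-line script. A minimal sketch, assuming the stun.get_ip_info() helper that this pystun version exposes (check pystun's README for the exact signature):

import stun  # provided by the pystun package

# get_ip_info() runs the NAT discovery against a public STUN server
nat_type, external_ip, external_port = stun.get_ip_info()
print("NAT Type:", nat_type)
print("External IP:", external_ip)
print("External Port:", external_port)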

A list of public STUN servers:

Cross-check the result with https://www.whatismyip.com/

25 July 2015

XtreemFS Replication


XtreemFS offers replication of all data. On the one hand, the Directory Service (DIR) and the Metadata and Replica Catalog (MRC) are replicated at database level. On the other hand, files are replicated on the OSDs with read/write or with read-only replication. In this chapter, we describe how these replication mechanisms work, their requirements, and potential use-cases.


A consequence of replication is the load placed on the network when data propagates between servers, so (read/write) replication should preferably involve no more than about 10 replicas.

The DIR and MRC can also be replicated. I think this is what I need for my research.

6.3  MRC and DIR Replication
Aside from file replication across OSDs, XtreemFS also supports MRC and DIR replication to increase data safety. MRC replication covers all file system metadata, whereas DIR replication covers configuration information of services as well as volumes.
6.3.1  Technical Details
DIR and MRC replication rely on the same principle as read-write replication of files. A primary replica, which is distinguished by means of a lease, accepts all updates and disseminates these to all backup replicas in the same order. When the primary fails, the lease will eventually expire and one of the former backup replicas can become primary. Unlike file replication, which may involve a different set of OSDs for each file, an MRC or DIR replicates its entire database. A replicated MRC or DIR consists of at least two individual server instances. Note that you will need three or more instances to be able to transparently recover from failures, as a majority of replicas always needs to be available to make progress.


To enable database replication across a set of DIR or MRC instances, it is necessary to enable replication and configure its parameters. This needs to be done prior to starting up the services. The basic steps are the following (a configuration sketch follows the list):
  • Enable the replication plug-in on all replicated MRC/DIR instances
  • Configure replication parameters across all instances
  • Start up all replicated MRC/DIR instances
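
As a sketch of what this looks like for a three-instance DIR (the babudb.plugin.0 / babudb.repl.* parameter names and file paths follow my reading of the XtreemFS user guide and should be verified against the version in use; dir-host-1 to dir-host-3 are placeholder host names), the replication plug-in is enabled in dirconfig.properties:

babudb.plugin.0 = /etc/xos/xtreemfs/server-repl-plugin/dir.properties

and the participants are listed identically in that plug-in file on every instance:

babudb.repl.participant.0 = dir-host-1
babudb.repl.participant.0.port = 35678
babudb.repl.participant.1 = dir-host-2
babudb.repl.participant.1.port = 35678
babudb.repl.participant.2 = dir-host-3
babudb.repl.participant.2.port = 35678
babudb.repl.sync.n = 2

Here babudb.repl.sync.n = 2 requires a majority of the three replicas to acknowledge each update, which matches the majority requirement described above. The MRC is configured the same way with its own plug-in properties file.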




The replication technique can be read at the reference URL above. It is quite simple and looks easy to follow.

16 July 2015

Lua

Playing around with nginx led me to Lua, a scripting language that is easy and quick to learn and can be embedded into applications.

Lua is designed to be a lightweight embeddable scripting language and is used for all sorts of applications from games to web applications and image processing.
The Lua reference I have used so far is http://tylerneylon.com/a/learn-lua/ , while for io operations I use this reference: http://www.lua.org/pil/21.1.html .

I access the Lua interpreter through Cygwin.


11 July 2015

XtreemFS Volume Operations (Create, Delete, List).


XtreemFS volumes are created on the MRC (Metadata and Replica Catalog) server. Volume operations are as follows:

4.2  Volume Management
Like many other file systems, XtreemFS supports the concept of volumes. A volume can be seen as a container for files and directories with its own policy settings, e.g. for access control and replication. Before being able to access an XtreemFS installation, at least one volume needs to be set up. This section describes how to deal with volumes in XtreemFS.
4.2.1  Creating Volumes
Volumes can be created with the mkfs.xtreemfs command line utility. Please see mkfs.xtreemfs --help or man mkfs.xtreemfs for a full list of options and usage.
When creating a volume, you can specify the authorization policy (see Sec. 7.2) with the option --access-control-policy (respectively -a). If not specified, POSIX permissions/ACLs will be chosen by default. Unlike most other policies, authorization policies cannot be changed afterwards.
In addition, it is possible to set a default striping policy (see Sec. 7.4). If no per-file or per-directory default striping policy overrides the volume's default striping policy, the volume's policy is assigned to all newly created files. If no volume policy is explicitly defined when creating a volume, a RAID0 policy with a stripe size of 128kB and a width of 1 will be used as the default policy.
A volume with the default options (POSIX permission model, a stripe size of 128 kB and a stripe width of 1 (i.e. all stripes will reside on the same OSD)) can be created as follows:
$> mkfs.xtreemfs my-mrc-host.com/myVolume
Creating a volume may require privileged access, which depends on whether an administrator password is required by the MRC. To pass an administrator password, add --admin_password <password> to the mkfs.xtreemfs command.
For a complete list of parameters, please refer to mkfs.xtreemfs --help or the man mkfs.xtreemfs man page.
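For instance, a volume with a wider striping policy could be created as follows (a sketch; the flag names are the ones I recall from mkfs.xtreemfs --help, so double-check them for your version):

$> mkfs.xtreemfs -a POSIX \
     --striping-policy RAID0 \
     --striping-policy-stripe-size 256 \
     --striping-policy-width 2 \
     my-mrc-host.com/myVolume

This would spread each file in 256 kB stripes across two OSDs.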
4.2.2  Deleting Volumes
Volumes can be deleted with the rmfs.xtreemfs tool. Deleting a volume implies that all data, i.e. all files and directories on the volume, are irrecoverably lost! Please see rmfs.xtreemfs --help or man rmfs.xtreemfs for a full list of options and usage. Please also note that rmfs.xtreemfs does not dispose of file contents on the OSDs. To reclaim the storage space occupied by the volume, it is therefore necessary to either remove all files from the volume before deleting it, or to run the cleanup tool (see Section 5.2.2).
The volume myVolume residing on the MRC my-mrc-host.com (listening at the default port) can e.g. be deleted as follows:
$> rmfs.xtreemfs my-mrc-host.com/myVolume
Volume deletion is restricted to volume owners and privileged users. Similar to mkfs.xtreemfs, an administrator password can be specified if required.
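For example, assuming rmfs.xtreemfs accepts the same --admin_password option as mkfs.xtreemfs (the password value here is a placeholder):

$> rmfs.xtreemfs --admin_password secret my-mrc-host.com/myVolume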
4.2.3  Listing all Volumes
A list of all volumes can be displayed with the lsfs.xtreemfs tool. All volumes hosted by the MRC my-mrc-host.com (listening at the default port) can be listed as follows:
$> lsfs.xtreemfs my-mrc-host.com
The listing of available volumes is restricted to volume owners and privileged users. Similar to mkfs.xtreemfs, an administrator password can be specified if required.
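
Once created, a volume is mounted via the DIR rather than the MRC. A minimal sketch with a placeholder DIR host, using the mount.xtreemfs FUSE client:

$> mkdir -p /mnt/xtreemfs
$> mount.xtreemfs my-dir-host.com/myVolume /mnt/xtreemfs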

07 July 2015

Notes on Running Multiple OSDs on a Single Machine.


The OSD is one of the XtreemFS servers and functions as the storage server. Ideally, each OSD is installed on a separate machine, but for testing purposes multiple OSDs can be installed on one and the same machine.

Criteria for running multiple OSDs on the same machine:
To replicate a file, you need a setup with at least two OSDs. Since XtreemFS uses majority voting, a fault-tolerant setup requires at least three replicas. For testing, you can run multiple OSDs on the same machine, just make sure that they use different ports (http_port and listen.port in osdconfig.properties).


Reference for running multiple OSDs:
3.3.2  Running multiple OSDs per Machine
Running more than one OSD service per host might be useful in various situations. Use cases for this might be machines with more than one disk as an alternative to a local RAID or testing purposes. We offer an extended init.d script, named xtreemfs-osd-farm, to start or stop a set of OSDs on one host by a single script.

The xtreemfs-osd-farm script can be found in the /usr/share/xtreemfs directory, if XtreemFS is installed by the provided packages, or in the contrib directory of the XtreemFS GIT repository.

Using the xtreemfs-osd-farm script demands two steps. First, a list of names for all of the used OSDs has to be set in the OSD_INSTANCES variable in the script. The list elements have to be separated by spaces. In the second step, a configuration file with the name <osdname>.config.properties has to be created in /etc/xos/xtreemfs for each of the defined OSD names, where <osdname> has to be replaced by the particular OSD name. After these steps, the init.d script can be executed with the usual arguments start, stop, status, restart, and try-restart. A single OSD can be controlled by xtreemfs-osd-farm <osdname> <argument>.
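
A sketch of that setup with three example instance names (osd1, osd2, osd3):

# in /etc/init.d/xtreemfs-osd-farm:
OSD_INSTANCES="osd1 osd2 osd3"

# matching config files expected in /etc/xos/xtreemfs/:
#   osd1.config.properties, osd2.config.properties, osd3.config.properties

$> /etc/init.d/xtreemfs-osd-farm start         # start all defined OSDs
$> /etc/init.d/xtreemfs-osd-farm osd2 status   # control a single OSD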


Guidance on running multiple OSDs per machine, from the XtreemFS Google group:
Michael Berlin
2/13/13
Dear Russ,

I once posted an init.d script which allows you to start and stop
multiple OSDs with different configuration files:

I still use this script for one of our internal installations - however,
it is not updated to the latest changes of the init.d script, i.e. it
may require some more testing and minor modifications.

The multiple OSD configuration files must differ in the following settings:

a) uuid =

Every OSD needs a unique UUID. btw: they don't have to be randomized,
only unique and must never change - for example, sometimes I set the
UUID just to "[installation-name]-osd1", "[installation-name]-osd2" and
so on. This way you can also use the UUIDs to keep track of the
different machines.

b) listen.port =

Port of the OSD.

c) http_port =

Port for the Webinterface.

d) object_dir =

Place where the data is stored.

Best regards,
Michael
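
Putting those four settings together, two OSD configuration files on the same machine might differ like this (the values are examples; 32640 and 30640 are, as far as I know, the default OSD listen and HTTP ports, so the second instance simply increments them):

# /etc/xos/xtreemfs/osd1.config.properties
uuid = myinstallation-osd1
listen.port = 32640
http_port = 30640
object_dir = /var/lib/xtreemfs/objs/osd1/

# /etc/xos/xtreemfs/osd2.config.properties
uuid = myinstallation-osd2
listen.port = 32641
http_port = 30641
object_dir = /var/lib/xtreemfs/objs/osd2/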


05 July 2015

XtreemFS Notes.

XtreemFS is a fault-tolerant distributed file system for all storage needs.

A collection of resources for implementing XtreemFS.

Quick start:

Deploying XtreemFS is basically quite easy on an Ubuntu server. Just follow the steps at the URL above, although there are a few other activities that require additional references.

Adding a user to a group, for example adding a user to the fuse group. On Ubuntu, FUSE users must be members of the 'fuse' group; the command for this is sketched below.
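
A sketch of the group change (myuser is a placeholder; log out and back in afterwards so the new membership takes effect):

$ sudo usermod -a -G fuse myuser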

A review of FUSE (Filesystem in Userspace).

Information about loadable kernel modules; this is needed to add the fuse module to a Linux system.
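
For reference, loading the fuse module and checking that it is present typically looks like this:

$ sudo modprobe fuse
$ lsmod | grep fuse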