Friday, December 30, 2011

Getting further with Dialyzer

In the previous post, I was able to compile modules in a way that allows them to be analyzed with Dialyzer.

I installed Erlang myself from source, so I set ERL_TOP to the directory where I built it:
philip@desktop:~/s_server/src$ export ERL_TOP=~/Packages/otp




Now to build a PLT (Persistent Lookup Table).  I only include the Erlang applications which my application depends on:

philip@desktop:~/s_server/src$ dialyzer --build_plt -r . $ERL_TOP/lib/stdlib/ebin $ERL_TOP/lib/kernel/ebin


This took about 12 minutes for me on quite an old machine (P4 2.6 GHz).



Now I create my own PLT, which is the previous PLT combined with the information generated from my own code:

philip@desktop:~/s_server/src$ dialyzer --add_to_plt -r . --output_plt s_server.plt


Finally I can analyse my own code, which is in the current directory:

philip@desktop:~/s_server/src$ dialyzer --plt s_server.plt -r .
  Checking whether the PLT s_server.plt is up-to-date... yes
  Proceeding with analysis...
s_server_tests.erl:14: The variable __V can never match since previous clauses completely covered the type 'true'
s_server_tests.erl:16: The variable __V can never match since previous clauses completely covered the type 'true'
s_server_tests.erl:48: The variable _ can never match since previous clauses completely covered the type 'false'
Unknown functions:
  eunit:test/1
 done in 0m1.17s
done (warnings were emitted)

The warnings I received came from the eunit macros, not from the actual code I wanted to analyse.  It would be nice if there were a way to suppress these.
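Until there is such an option, one workaround is to post-process Dialyzer's output and drop warnings that originate in test modules. This is just a sketch over the log lines above, and the _tests.erl suffix is an assumption about how your test files are named:

```python
# Drop Dialyzer warnings whose source file ends in _tests.erl.
# The sample log is taken from the run above.
log = """s_server_tests.erl:14: The variable __V can never match since previous clauses completely covered the type 'true'
s_server_tests.erl:16: The variable __V can never match since previous clauses completely covered the type 'true'
s_server_tests.erl:48: The variable _ can never match since previous clauses completely covered the type 'false'
done (warnings were emitted)"""

def from_test_module(line):
    # Dialyzer warnings start with "<filename>:<line>: ..."
    filename = line.split(":", 1)[0]
    return filename.endswith("_tests.erl")

for line in log.splitlines():
    if not from_test_module(line):
        print(line)
```

In practice you would read sys.stdin instead of a hard-coded string and pipe Dialyzer's output through the script.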

First steps with dialyzer

Today I tried to use Dialyzer on my s_server module.  Don't worry if you have not read any of my posts about this module; it doesn't do anything useful!

At first I tried to analyse my test module:

philip@desktop:s_server/src$ erlc +debug_info s_server_tests.erl
philip@desktop:s_server/src$ dialyzer -c s_server_tests.erl --build_plt


This gave me the following result:

dialyzer: {dialyzer_error,"Byte code compiled with debug_info is needed to build the PLT"}
[{dialyzer_options,check_output_plt,1,
                   [{file,"dialyzer_options.erl"},{line,86}]},
 {dialyzer_options,postprocess_opts,1,
                   [{file,"dialyzer_options.erl"},{line,75}]},
 {dialyzer_options,build,1,[{file,"dialyzer_options.erl"},{line,63}]},
 {dialyzer_cl_parse,cl,1,[{file,"dialyzer_cl_parse.erl"},{line,218}]},
 {dialyzer_cl_parse,start,0,[{file,"dialyzer_cl_parse.erl"},{line,46}]},
 {dialyzer,plain_cl,0,[{file,"dialyzer.erl"},{line,60}]},
 {init,start_it,1,[]},
 {init,start_em,1,[]}]


The problem was that I should have given the .beam files to Dialyzer to analyse, e.g.

philip@desktop:s_server/src$ dialyzer -c s_server_tests.beam --build_plt
  Creating PLT /home/philip/.dialyzer_plt ...
Unknown functions:
  eunit:test/1
  s_server:ping/0
  s_server:start_link/0
  s_server:stop/0
 done in 0m0.40s
done (passed successfully)

Wednesday, November 9, 2011

An Erlang Application in 5 minutes

Introduction
The purpose of this tutorial is to get an Erlang application up and running with as little work as possible.  The application will consist of one supervisor which monitors a simple server (the application will be called s_server).

There will be one worker process with a gen_server behaviour.  When this process receives a ping, it will respond with a pong.

Kind of like the "hello world" of Erlang applications!

Vim Setup
Yes, I'm going to use vim to speed things up; why break a 20-year-old habit?

I installed vim-erlang-skeletons from https://github.com/aerosol/vim-erlang-skeletons.git.  It gives you well-documented, complete skeletons of Erlang behaviours.  If you don't want to use vim, you can just google for a copy of the relevant skeleton.




Installing rebar
I'm still new to rebar, but it's definitely a great tool for speeding up the writing of Erlang applications.  It's going to save us quite a bit of manual work here.

$ mkdir -p ~/Programming/Erlang
$ cd ~/Programming/Erlang
$ git clone https://github.com/basho/rebar.git
$ cd rebar && make


Create a new directory for the application
The application will be called s_server:
$ mkdir ~/Programming/Erlang/s_server
$ cd ~/Programming/Erlang/s_server

Then copy the rebar executable into the s_server project dir:
$ cp ../rebar/rebar .


Creating an OTP App
$ ./rebar create-app appid=s_server

The app directory then looks like this:
s_server
|-- rebar
`-- src
    |-- s_server_app.erl
    |-- s_server.app.src
    `-- s_server_sup.erl


$ ./rebar compile


This creates the ebin directory with the compiled code as well as the application specification.

s_server
|-- ebin
|   |-- s_server.app
|   |-- s_server_app.beam
|   `-- s_server_sup.beam
|-- rebar
`-- src
    |-- s_server_app.erl
    |-- s_server.app.src
    `-- s_server_sup.erl


The app spec in ebin/s_server.app is created from the template in
src/s_server.app.src

You just finished making an OTP application!

Starting the Application
Start an Erlang shell and add the ebin directory to its path:
$ erl -pa ebin

From the Erlang shell:
1> application:start(s_server).

Now the application is running.

Of course you haven't written a worker process yet, so the application does nothing, but you can use appmon to check that the application is running:
2> appmon:start().

When you are done, leave the Erlang shell:
3> q().

Writing a worker process for the s_server application
Start up vim and type :ErlServer to get the skeleton of a gen_server behaviour.

Enter :w src/s_server.erl to save it (you can also compile at this point if you wish, just to check that everything is still fine).

In the first line of code, set the correct module name:
-module(s_server).

It is also useful to add the following under the module declaration:
-compile([export_all, debug_info]).


In the API section, add the following code:

ping() ->
        %% the skeleton registers the server locally under the module name
        gen_server:call(?MODULE, ping).


In the callbacks section, delete the existing handle_call skeleton and replace it with:
handle_call(ping, _From, State) ->
        Reply = pong,
        {reply, Reply, State}.

Getting the supervisor module ready
From within vim, open src/s_server_sup.erl.  Delete all the existing code and type :ErlSupervisor to replace it with a better template.

In the init function, you need to replace AModule with the name of the module which contains our gen_server code, i.e. s_server:
:%s/AModule/s_server/g


Start the Application

Save the file and compile with:
$ ./rebar compile

Start up the Erlang shell:
$ erl -pa ebin

Start the s_server application:
1> application:start(s_server).
ok

Now test it out:
2> s_server:ping().
pong

And to prove that the supervisor restarts the server when it crashes, try killing it:

First get the process id of s_server from the output of regs().

3> regs().
** Registered procs on node nonode@nohost **
Name                  Pid          Initial Call                      Reds Msgs
:
s_server              <0.39.0>     s_server:init/1                     32    0

Then send an exit signal to it:
4> exit(pid(0, 39, 0), kill).

You can then check the process id of s_server again with regs().  It should now be different, because the supervisor restarted the server after the exit signal killed it.


Thursday, October 13, 2011

Where is MySQL persisted to?

Quick answer: the files are persisted to the directory specified by MySQL's 'datadir' variable.

To get the current value do:

- start up mysql (replace <username> with your MySQL user)
mysql -u <username> -p

- use the SHOW VARIABLES command
mysql> show variables;

or like this:

mysql> show variables like 'datadir';
+---------------+-----------------+
| Variable_name | Value           |
+---------------+-----------------+
| datadir       | /var/lib/mysql/ |
+---------------+-----------------+

If you cd into the datadir, you will find one directory per database, and inside each database directory every table has its own .frm file.
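If you want to script this check, here is a small sketch.  The path is the default datadir from the output above; the function simply returns an empty list if the directory does not exist on your machine:

```python
import os

DATADIR = "/var/lib/mysql"  # the 'datadir' value from SHOW VARIABLES above

def list_databases(datadir):
    """Each subdirectory of datadir corresponds to one database."""
    if not os.path.isdir(datadir):
        return []
    return sorted(entry for entry in os.listdir(datadir)
                  if os.path.isdir(os.path.join(datadir, entry)))

print(list_databases(DATADIR))
```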

MySQL 5 uses InnoDB (http://en.wikipedia.org/wiki/InnoDB) as the storage engine which creates these files.







Friday, September 16, 2011

Erlang: how to fix "Can't set long node name"

For this post, my platform is Ubuntu 10.10

# Start up an erlang node with a long name as follows:
erl -name mynode


# This results in a long stack trace from erlang and the first line says:
{error_logger,{{2011,9,16},{18,1,5}},"Can't set long node name!\nPlease check your configuration\n",[]}

# The problem is the way your hostname is set, e.g.

philip@myserver:$ hostname
myserver

# Now you can run:
sudo hostname myserver.mydomainname.com

# Next time you run hostname you should get:
myserver.mydomainname.com


# In order to make this permanent, you need to edit your /etc/hostname file and change the hostname from
myserver

# to be something like
myserver.mydomainname.com



# and now you can start the Erlang node
philip@myserver:$ erl -name mynode
Erlang R13B03 (erts-5.7.4) [source] [smp:4:4] [rq:4] [async-threads:0] [hipe] [kernel-poll:false]

Eshell V5.7.4  (abort with ^G)
(mynode@myserver.mydomainname.com)1>
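The rule of thumb is that erl -name wants a fully qualified hostname, i.e. one containing a dot.  As a side note, you can check this from a script; this is just an illustration of the rule, not part of the Erlang setup, and socket.getfqdn is simply one way to ask the resolver:

```python
import socket

def is_long_name_ok(hostname):
    """erl -name needs a fully qualified hostname, i.e. one with a dot."""
    return "." in hostname

print(is_long_name_ok("myserver"))                   # False
print(is_long_name_ok("myserver.mydomainname.com"))  # True
print(socket.getfqdn())  # what the resolver reports for this machine
```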


Django South: Changing a field from null = True to null = False

# We have this table
mysql> show columns from myapp_mymodel where Field = "my_field_id";

+-------------+---------+------+-----+---------+-------+
| Field       | Type    | Null | Key | Default | Extra |
+-------------+---------+------+-----+---------+-------+
| my_field_id | int(11) | YES  | MUL | NULL    |       |
+-------------+---------+------+-----+---------+-------+
1 row in set (0.00 sec)


# We want to change this table so that column Null is NO for Field my_field_id


# First create an empty migration
bin/django schemamigration myapp mymodel_my_field_cannot_be_null --empty



# Then add the forward migration:
    def forwards(self, orm):
        #db.alter_column(table_name, column_name, field, explicit_name=True)
        db.alter_column('myapp_mymodel', 'my_field_id', models.ForeignKey(orm['myapp.MyModel'], null = False), explicit_name=True)


When this is run, the following SQL is executed:
DEBUG:django.db.backends:(0.000) SET FOREIGN_KEY_CHECKS=0;; args=()
DEBUG:south:south execute "
            SELECT kc.constraint_name, kc.column_name
            FROM information_schema.key_column_usage AS kc
            JOIN information_schema.table_constraints AS c ON
                kc.table_schema = c.table_schema AND
                kc.table_name = c.table_name AND
                kc.constraint_name = c.constraint_name
            WHERE
                kc.table_schema = %s AND
                kc.table_catalog IS NULL AND
                kc.table_name = %s AND
                c.constraint_type = %s
        " with params "['django', 'myapp_mymodel', 'FOREIGN KEY']"
DEBUG:django.db.backends:(0.111)
            SELECT kc.constraint_name, kc.column_name
            FROM information_schema.key_column_usage AS kc
            JOIN information_schema.table_constraints AS c ON
                kc.table_schema = c.table_schema AND
                kc.table_name = c.table_name AND
                kc.constraint_name = c.constraint_name
            WHERE
                kc.table_schema = django AND
                kc.table_catalog IS NULL AND
                kc.table_name = myapp_mymodel AND
                c.constraint_type = FOREIGN KEY
        ; args=['django', 'myapp_mymodel', 'FOREIGN KEY']
DEBUG:south:south execute "ALTER TABLE `myapp_mymodel` ;" with params "[]"
DEBUG:django.db.backends:(0.000) ALTER TABLE `myapp_mymodel` ;; args=[]
DEBUG:south:south execute "ALTER TABLE `myapp_mymodel` MODIFY `my_field_id` integer NOT NULL;;" with params "[]"
DEBUG:django.db.backends:(0.075) ALTER TABLE `myapp_mymodel` MODIFY `my_field_id` integer NOT NULL;;; args=[]
DEBUG:south:south execute "ALTER TABLE `myapp_mymodel` ALTER COLUMN `my_field_id` DROP DEFAULT;" with params "[]"
DEBUG:django.db.backends:(0.067) ALTER TABLE `myapp_mymodel` ALTER COLUMN `my_field_id` DROP DEFAULT;; args=[]
DEBUG:south:south execute "ALTER TABLE `myapp_mymodel` ADD CONSTRAINT `my_field_id_refs_id_15e652d5` FOREIGN KEY (`my_field_id`) REFERENCES `myapp_my_field` (`id`);" with params "[]"
DEBUG:django.db.backends:(0.283) ALTER TABLE `myapp_mymodel` ADD CONSTRAINT `my_field_id_refs_id_15e652d5` FOREIGN KEY (`my_field_id`) REFERENCES `myapp_my_field` (`id`);; args=[]
DEBUG:south:south execute "SET FOREIGN_KEY_CHECKS=1;" with params "[]"
DEBUG:django.db.backends:(0.000) SET FOREIGN_KEY_CHECKS=1;; args=[]
DEBUG:django.db.backends:(0.000) SELECT `south_migrationhistory`.`id`, `south_migrationhistory`.`app_name`, `south_migrationhistory`.`migration`, `south_migrationhistory`.`applied` FROM `south_migrationhistory` WHERE (`south_migrationhistory`.`app_name` = myapp  AND `south_migrationhistory`.`migration` = 0006_mymodel_my_field_cannot_be_null ); args=('myapp', '0006_mymodel_my_field_cannot_be_null')
DEBUG:django.db.backends:(0.000) INSERT INTO `south_migrationhistory` (`app_name`, `migration`, `applied`) VALUES (myapp, 0006_mymodel_my_field_cannot_be_null, 2011-09-16 14:37:50); args=('myapp', '0006_mymodel_my_field_cannot_be_null', u'2011-09-16 14:37:50')




# and the backwards migration is:
    def backwards(self, orm):
        db.alter_column('myapp_mymodel', 'my_field_id', models.ForeignKey(orm['myapp.MyModel'], null = True), explicit_name=True)

Wednesday, August 31, 2011

Running the Thrift Tutorial with Python

See the previous post about installing Thrift.


We will assume that you are currently in the thrift directory.
The tutorial.thrift file is very well written, with lots of useful comments, but here are the few lines from this file which we really need:

namespace cpp tutorial
namespace java tutorial
namespace php tutorial
namespace perl tutorial

enum Operation {
  ADD = 1,
  SUBTRACT = 2,
  MULTIPLY = 3,
  DIVIDE = 4
}
struct Work {
  1: i32 num1 = 0,
  2: i32 num2,
  3: Operation op,
  4: optional string comment,
}


exception InvalidOperation {
  1: i32 what,
  2: string why
}

service Calculator extends shared.SharedService {

   void ping(),
   i32 add(1:i32 num1, 2:i32 num2),
   i32 calculate(1:i32 logid, 2:Work w) throws (1:InvalidOperation ouch),
   oneway void zip()

}
 
 
Copy the files tutorial.thrift and shared.thrift into a new directory and generate the Python files:

thrift -r --gen py:new_style tutorial.thrift

where
-r makes thrift also generate code for included files
--gen py:new_style makes thrift use the py generator with the optional new_style argument, which generates new-style classes
 
 
Looking at gen-py/tutorial/Calculator.py, you can see that the Calculator service has been turned into an interface (class Iface).

The Iface class is inherited by class Client and class Processor.

The Client implements the methods defined by the interface.  For example, the add method does two things: it calls send_add and returns the result of recv_add.

The send_add method writes the name of the method, 'add', and its arguments to the Thrift transport.
The recv_add method receives the result of the add method.

In addition to the Client, there is a class called Processor which also implements Iface.
In the case of the add method, the Processor reads the arguments, calls the handler which performs the add operation, and writes the result back.
The handler is the important part, as it is the class you write yourself to implement the methods.
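The round trip described above can be sketched in plain Python with no Thrift imports.  The class names mirror the generated code, but everything here is a simplified stand-in: two lists play the role of the transport, and there is no serialization:

```python
class Iface:
    """Mirror of the generated interface: one method per service function."""
    def add(self, num1, num2):
        raise NotImplementedError

class Client(Iface):
    """The generated Client: add() is send_add() followed by recv_add()."""
    def __init__(self, out_queue, in_queue):
        self.out_queue, self.in_queue = out_queue, in_queue
    def add(self, num1, num2):
        self.send_add(num1, num2)
        return self.recv_add()
    def send_add(self, num1, num2):
        # write the method name and its arguments to the "transport"
        self.out_queue.append(("add", (num1, num2)))
    def recv_add(self):
        # receive the result of the add method
        return self.in_queue.pop(0)

class Processor:
    """Reads a call, dispatches to the handler, writes the result back."""
    def __init__(self, handler, in_queue, out_queue):
        self.handler, self.in_queue, self.out_queue = handler, in_queue, out_queue
    def process(self):
        name, args = self.in_queue.pop(0)
        self.out_queue.append(getattr(self.handler, name)(*args))

class CalculatorHandler(Iface):
    """The part you write yourself."""
    def add(self, num1, num2):
        return num1 + num2

client_to_server, server_to_client = [], []
client = Client(client_to_server, server_to_client)
processor = Processor(CalculatorHandler(), client_to_server, server_to_client)

client.send_add(1, 2)     # client writes 'add' and its arguments
processor.process()       # server reads the call and runs the handler
print(client.recv_add())  # prints 3
```

In the real generated code the client's add() does both halves in one call, blocking on the socket until the server has written its reply.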



Installing Thrift

I'm doing this on Ubuntu 10.10.

First, the following dependencies need to be installed:

sudo apt-get install libboost-dev libboost-test-dev libboost-program-options-dev libevent-dev automake libtool flex bison pkg-config g++
 

While that is running, you can check out the latest Thrift:
svn co http://svn.apache.org/repos/asf/thrift/trunk thrift

This creates a directory called thrift, which you should cd into.

Next, run:
./bootstrap.sh
./configure
make
sudo make install

In the next post, I will look at running the tutorial which is found in the tutorial directory.