I can’t agree with this statement: “IronPython is actually doing quite a lot at startup time” http://ironpython-urls.blogspot.com/2009/08/good-mix-20-startup-time-inline-c.html If you are doing the same thing again and again and it affects users of the language, why not do it some other way? It should probably be possible to pre-compile the parts of the CPython standard library that IronPython uses. A couple of extra DLLs is not a big deal if you get a 10 times faster startup time. Right now, for example, it is not possible to write ‘unit tests’ using IronPython, simply because a ‘unit test’ is code that takes less than 10 ms to run.

 

 

 

 


  1. The goal:

    You have an AWS load balancer backed by spot instances. You need an engine that starts spot instances automatically when existing ones die.

    Then it is just three steps:

    as-create-launch-config phabricator-fe-lc --image-id ami-4b8d2622 --instance-type t1.micro --spot-price 0.02

    as-create-auto-scaling-group phabricator-fe-group-d --launch-configuration phabricator-fe-lc --availability-zones us-east-1d --min-size=1 --max-size=1 --default-cooldown 180 --grace-period 240 --health-check-type ELB --load-balancers phabricator


     as-create-auto-scaling-group phabricator-fe-group-c --launch-configuration phabricator-fe-lc --availability-zones us-east-1c --min-size=1 --max-size=1 --default-cooldown 180 --grace-period 240 --health-check-type ELB --load-balancers phabricator

    Done!

    Of course, you have to have the load balancer up and running already :)

    Starting a spot instance can take time. If you want a replacement instance up and running as fast as possible, also create a launch configuration and an auto-scaling group for an On-Demand instance:

    as-create-launch-config phabricator-fe-lc-demand --image-id ami-4b8d2622 --instance-type t1.micro


    as-create-auto-scaling-group phabricator-fe-group-demand --launch-configuration phabricator-fe-lc-demand --availability-zones us-east-1d --min-size=0 --max-size=1 --default-cooldown 180 --grace-period 240 --health-check-type ELB --load-balancers phabricator

    Then create two scaling policies, up and down:

    as-put-scaling-policy on-demand-up-policy --auto-scaling-group phabricator-fe-group-demand --adjustment=1 --type ChangeInCapacity

    as-put-scaling-policy on-demand-down-policy --auto-scaling-group phabricator-fe-group-demand --adjustment=-1 --type ChangeInCapacity

    You are almost there!

    Now go to the CloudWatch Web UI and create whatever alarms you need for starting and terminating the On-Demand instance.

    I created for starting:

    HealthyHostCount < 1 for 1 minute

    For terminating:

    HealthyHostCount > 1 for 1 minute
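
    If you prefer the command line to the Web UI, the CloudWatch command-line tools can create an equivalent alarm. This is only a sketch: the alarm name is made up, and you have to substitute the real policy ARN printed by as-put-scaling-policy:

    mon-put-metric-alarm on-demand-up-alarm --namespace "AWS/ELB" --metric-name HealthyHostCount --dimensions "LoadBalancerName=phabricator" --statistic Average --period 60 --evaluation-periods 1 --threshold 1 --comparison-operator LessThanThreshold --alarm-actions <arn-of-on-demand-up-policy>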

    If you need notifications when a new instance is started:

    as-put-notification-configuration phabricator-fe-group-demand -t arn:aws:sns:us-east-1:758139277749:Phabricator -n autoscaling:EC2_INSTANCE_LAUNCH


    References:


    http://aws.amazon.com/autoscaling/
    http://docs.amazonwebservices.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html


    If you change the AMI, do this:


    as-update-auto-scaling-group phabricator-fe-group-c --min-size 0
    as-update-auto-scaling-group phabricator-fe-group-d --min-size 0

    as-describe-auto-scaling-groups
    ... (all instances will be listed here)

    as-terminate-instance-in-auto-scaling-group i-c9a576b5 --no-decrement-desired-capacity


    as-create-launch-config phabricator-fe-lc-demand1 --image-id  ami-bcec54d5 --instance-type t1.micro

    as-update-auto-scaling-group phabricator-fe-group-demand --launch-configuration phabricator-fe-lc-demand1


    as-delete-launch-config  phabricator-fe-lc
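
    To double-check that the groups picked up the new launch configuration (an extra verification step, not part of the original recipe):

    as-describe-launch-configs
    as-describe-auto-scaling-groups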






  2. First, check whether the volume is attached:

    >> sudo fdisk -l
    Disk /dev/xvdf: 10.7 GB, 10737418240 bytes

    Then format it (if you haven't done so already):
    >> sudo mkfs -t ext4 /dev/xvdf

    Create the dir where it will be mounted:
    >> sudo mkdir /mnt

    That's it, now you can mount it:
    >> sudo mount /dev/xvdf /mnt

    Check if it has been mounted correctly with:
    >>mount -l
    /dev/xvdf on /mnt type ext4 (rw)

    To make it mount automatically on system start:
    >> sudo vim /etc/fstab

    and add this:
    /dev/xvdf       /mnt    auto    defaults,nobootwait     0       0
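
    To make sure the fstab entry is correct without rebooting, you can unmount the volume and let mount re-read /etc/fstab (an extra check on top of the original steps):

    >> sudo umount /mnt
    >> sudo mount -a
    >> mount -l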

    Done

    BTW, for some reason the AWS Console shows /dev/xvdf as /dev/sdf.

  3. If you run Google Chrome with these parameters:

    --enable-logging --v=1 (http://goo.gl/BL4bP)
    --user-data-dir (http://goo.gl/BCkal)

    you get a completely independent instance of Chrome that logs a lot of interesting things, among them all calls from JavaScript to the console.log, console.info, and console.error functions.

    What does this mean for Acceptance Testing? Simple! If you log from JavaScript, you can test your application through the Google Chrome log.
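
    As a rough sketch of the idea (the user-data directory, the log file name inside it, and the marker string are my assumptions, not taken from the linked examples): with --user-data-dir=/c/chrome-profile, Chrome writes chrome_debug.log into that directory, and a test can simply search it for a marker the application logged with console.log:

    grep "ORDER-SAVED" /c/chrome-profile/chrome_debug.log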

    Here is an example of how I check the Google Chrome log from PowerSlim: http://goo.gl/NT7b2 And here is how the log is filled from JavaScript: http://goo.gl/LLeV4 I am doing it from QUnit tests, but it could just as easily be done from the real app.

    BTW, PowerSlim is a nice PowerShell-based SLIM server for the FitNesse Acceptance Testing framework: http://goo.gl/BmZ7E

  4. Pivotal Tracker covers almost 100% of what I need to track 5 small teams: 4 in SPb and one in Zhuhai.

    It is fast (rich JavaScript UI). It is very simple from a usability perspective: it is a rare case when you need to click twice to get what you need. For example, Gravity https://www.gravitydev.com/ is not that simple: to change a Story you have to open it first and then click Edit.

    What I use Pivotal Tracker for:

    - Release planning: dev managers fill the Backlog
    - Tracking what has been done and what is being done right now
    - ETA: prediction of the Release Date
    - Comparison of teams from a velocity perspective

    IMHO Pivotal Tracker has exactly the right definition of team velocity: it measures a team’s velocity only by done User Stories. That is what I need. Nothing more.

    What’s missing: Google Wave integration :) So all our User Story titles are duplicated in PT and Google Wave, and we discuss them in Google Wave.

    Interestingly, I have one PT Project per Team: that’s why it is so easy for me to compare teams. At the same time this contradicts the Pivotal Tracker ideology: as far as I understand, they assume that several teams should be members of one Project per Product. By using a PT Project as a Team I am not getting a summary Product burndown chart, so I generate it myself by calling the PT API and then merging the data across all the teams working on the Product with a Python app (sketched below).
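
    A minimal sketch of the fetching part, assuming the v3 REST API with a token (the project IDs and the PT_TOKEN variable are placeholders); the merging and chart generation happen in the Python app:

    for project in 111111 222222 333333; do
        curl -s -H "X-TrackerToken: $PT_TOKEN" \
            "http://www.pivotaltracker.com/services/v3/projects/$project/stories" > stories-$project.xml
    done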

    So support for several teams working on one product is the second missing feature for me.

    But now that they are planning to become a paid service, there is a big chance I’ll start using labels and search queries for this instead. That way I’ll be paying $7 instead of $18 :)

  5. I started reading the great book ‘JavaScript Patterns’ http://goo.gl/51wTa and want to share a very interesting example from it.

    The beginning is in “2.2.5. Hoisting: A Problem with Scattered vars”:

    “JavaScript enables you to have multiple var statements anywhere in a function, and they all act as if the variables were declared at the top of the function. This behavior is known as hoisting.”

    It means a very simple thing: you can start using a variable even if it is declared later in the function, because the declaration is hoisted at interpretation/parse time (a simplification). And it is OK! The variable just has the value ‘undefined’.

    myname = "global";

    function func() {
        console.log(myname);
        var myname = "local";
        console.log(myname);
    }
    func();

    you get:
    >>undefined
    >>local

    Just to compare with Python: in a similar example there you get “UnboundLocalError: local variable 'myname' referenced before assignment”.

    Actually even this was a surprise for me, but wait... things get funnier when function declarations and function expressions get involved.

    You know the difference between function declaration and function expression, right? ;)

    If you don’t (BTW, I didn’t), it is described in “4.1.2. Declarations Versus Expressions: Names and Hoisting”.

    So let’s start the party! Here is a slightly modified example from “4.1.4. Function Hoisting”:

    function foo() { console.log('global foo'); }
    function bar() { console.log('global bar'); }

    function hoistMe() {
        foo();
        bar();
    }
    hoistMe();

    You get:
    >> global foo
    >> global bar

    Absolutely expected! Now the next example:

    function hoistMeMore() {
        foo();
        bar();

        function foo() {
            console.log('local foo');
        }

        var bar = function () {
            console.log('local bar');
        };
    }

    hoistMeMore();

    You get:
    >>local foo
    >>TypeError: bar is not a function

    ‘local foo’ is expected, right? But why the TypeError? Because! With a function expression (which is different from a function declaration) the variable bar is hoisted, but the function definition is not. So by the time bar() is called, bar already exists but is still undefined. That’s it!

    Enjoy the book! :)

  6. There is a great article describing the Network Graph: https://github.com/blog/39-say-hello-to-the-network-graph-visualizer

    I have no intention of repeating it here. I just want to say that it is very useful! It makes the teams’ work very transparent.

    Let me give you an example from almost real life! ;)

    So we have four teams and one main repository on GitHub. Every team has its own fork and implements features there. Only when a feature is done and acceptance tested by all teams is it merged from the team’s fork into the main repository.

    How does the Network Graph help? Simple. It shows you where every team is and where the main repository is. And it shows all this info from the perspective of the fork whose Network Graph you are looking at.

    Example 1:

    You are on the main repository’s Network Graph. Team A has done three commits. Team B has done four commits. You see all of this:

    Main ------------------------
    Team A -------------------- 1 2 3
    Team B -------------------- 1 2 3 4

    Example 2:

    You are on Team A’s fork. Team B merged a new feature (100 commits) into the main repository. Again, you see all of this:

    Team A ---
    Team B --------1 2 --- 100 -----
    Main ----------------------------- 100 merge with Team B

  7. You’ve got your whole product on GitHub, great! You have a local clone of the GitHub repository, even better!

    But what do you do when your product is installed on a testing virtual environment and you see a problem? Fix it on your dev machine in your local repository, then copy the files to the testing machine, and then, if it works, commit?

    Not very efficient! You need immediate feedback, and you get that only if you modify the files directly there, on the testing environment!

    But if you start doing that, make changes in 3-4 places, and then copy them back into your local repository and commit, you are probably in trouble again! Why? Because you are a human being, and human beings tend to forget things.

    So here is an idea, not proven by production usage, but it works on test repositories. The idea is based on how git interprets changes: it identifies every file in the working directory by its path, it doesn’t delete files that exist only on your side when you merge, and it does a nice merge if there is no conflict.

    OK, closer to the point:

    You have your production repository A:

    .git
    --Folder1
    -----Folder2

    After the installation there is the same folder tree. It is very important that the trees you’ll be changing are the same in both repositories!

    %Install Dir%
    --Folder1
    -----Folder2

    You can do this on your dev machine:
    $ cd "//remote-host/Install Dir"
    $ git init

    You don’t want to track everything, just the JavaScript files (quoting the pattern lets git pick them up in subfolders too):
    $ git add '*.js'
    $ git commit -m 'first commit'

    Then you fix the problem. After changing file1.js, file2.js, ...:
    $ git commit -a -m 'problem fixed'

    Nothing magic so far, just a separate git repository not connected to your product repository at all.

    Here is the fun part! You go to your main repository:
    $ cd /c/main
    $ git remote add remote-host "//remote-host/Install Dir"
    $ git fetch remote-host
    $ git merge remote-host/master -m 'all fixed files in the main repository!'

    git doesn’t mind that the same files already exist in both repositories; since the paths are the same, it takes only the changes!
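
    To see exactly which files came in with that merge (a small extra check, not part of the original workflow), diff against the pre-merge state that git saved in ORIG_HEAD:

    $ git diff --stat ORIG_HEAD HEAD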

    Done :)

    P.S. Oops, don’t forget about your Acceptance Test suite! Git doesn’t run it for you! ;)


  8. To communicate with the remote github.com repository from your local clone you need SSH keys. You generate them, add the public key to your GitHub account, and of course protect your private key with a passphrase, right? ;) All these steps are described here: http://help.github.com/msysgit-key-setup/

    Then you have two problems :)

    The first is annoying: you have to type the passphrase every time you push, pull, or fetch to/from GitHub.

    The second is worse: on your build machine you can’t easily automate the push command that should run after all your acceptance tests are green. And we all know that automation means no buttons to click, right? ;)

    So here is the cure: http://help.github.com/working-with-key-passphrases/ It is very simple: you just run ssh-agent, which asks for your passphrase once and then (as far as I understand) reads your private key and keeps it in memory.

    There is a nice script there that you can add to your .profile file. The only problem I had on Windows XP was that the variable $SSH_ENV had to be put in quotes wherever it was used.

    Done. Here is my version:

    SSH_ENV="$HOME/.ssh/environment"

    # start the ssh-agent
    function start_agent {
        echo "Initializing new SSH agent..."
        # spawn ssh-agent
        ssh-agent | sed 's/^echo/#echo/' > "$SSH_ENV"
        echo succeeded
        chmod 600 "$SSH_ENV"
        . "$SSH_ENV" > /dev/null
        ssh-add
    }

    # test for identities
    function test_identities {
        # test whether standard identities have been added to the agent already
        ssh-add -l | grep "The agent has no identities" > /dev/null
        if [ $? -eq 0 ]; then
            ssh-add
            # $SSH_AUTH_SOCK broken so we start a new proper agent
            if [ $? -eq 2 ]; then
                start_agent
            fi
        fi
    }

    # check for running ssh-agent with proper $SSH_AGENT_PID
    ps -ef | grep $SSH_AGENT_PID | grep ssh-agent > /dev/null
    if [ $? -eq 0 ]; then
        test_identities
    # if $SSH_AGENT_PID is not properly set, we might be able to load one from $SSH_ENV
    else
        . "$SSH_ENV" > /dev/null
        ps -ef | grep $SSH_AGENT_PID | grep ssh-agent > /dev/null
        if [ $? -eq 0 ]; then
            test_identities
        else
            start_agent
        fi
    fi
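
    To check that the agent is doing its job (not part of the script itself), list the loaded keys and test the GitHub connection; you should not be asked for the passphrase again:

    $ ssh-add -l
    $ ssh -T git@github.com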

  9. “No one can be part of multiple jelled teams. The tight interactions of the jelled team are exclusive. Enough fragmentation and people just won’t jell.” by Tom DeMarco, Timothy Lister


  10. “You can’t protect yourself against your own people’s incompetence. If your staff isn’t up to the job, you will fail. Of course, if the people are badly suited to the job, you should get new people. But once you’ve decided to go with a given group, your best tactic is to trust them. Any defensive measure taken to guarantee success in spite of them will only make things worse. It may give you some relief from worry in the short term, but it won’t help in the long run, and it will poison any chance for the team to jell.” by Tom DeMarco, Timothy Lister

