Wednesday 13 February, 2013

Perforce New and Old change list number map


By default, Perforce renames a pending change list to the latest change list number when it is submitted.
We do our reviews against the pending change list number.

The issue comes when we try to generate a report using the submitted change list numbers: they do not match the numbers in the reviews, because the pending change lists were renamed on submit.

The issue is resolved by generating a map of new and old change list numbers and doing a little bit of scripting around it.

The following command fetches the new and old change list numbers for a given depot path into a pipe-separated file.

p4 -ztag changes //depot/some/path/... |grep -i "change " |awk '{if($0~/oldChange/) {print a"|"$0"|"} a=$0}'|sed -e 's/\.\.\. change //g' -e 's/|\.\.\. oldChange /|/g' > changelist_map.csv


Let me explain how it works.
p4 -ztag changes //depot/some/path/...

The above command gives output like this:
... change 198325
... time 1360723311
... user test
... client test_test-W7
... status submitted
... changeType public
... path //depot/some/path/...
... oldChange 198289
... desc test desc

... change 198270
... time 1360673907
... user test1
... client test1_1780
... status submitted
... changeType public
... path //depot/some/other/path/...
... oldChange 198256
... desc some other test

grep -i "change "

The above command filters the output down to the change lines:
... change 198325
... oldChange 198289
... change 198270
... oldChange 198256

awk '{if($0~/oldChange/) {print a"|"$0"|"} a=$0}'

The above command joins each new/old pair onto a single line:
... change 198325|... oldChange 198289|
... change 198270|... oldChange 198256|

sed -e 's/\.\.\. change //g' -e 's/|\.\.\. oldChange /|/g'

The above command strips the field labels, leaving:
198325|198289|
198270|198256|

The > output redirection creates a new file named changelist_map.csv and stores the above output in it.

Using this mapping we are able to generate the report even though the change list numbers changed on submit.
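As a sketch of the "little bit of scripting" step (the report file name and its layout below are made up for illustration), the map can be joined back onto a report keyed on the submitted change list number like this:

```shell
# Hypothetical demo inputs: the map produced above ("new|old|") and a
# pipe-separated report keyed on the submitted change list number (field 1).
printf '198325|198289|\n198270|198256|\n' > changelist_map.csv
printf '198325|apples\n999999|pears\n' > report.csv

# Prepend the old (pending) number to every report row, falling back to
# the number itself when it was never renamed.
awk -F'|' '
    NR == FNR { map[$1] = $2; next }      # first file: new -> old map
    { old = ($1 in map) ? map[$1] : $1
      print old "|" $0 }
' changelist_map.csv report.csv > report_with_pending.csv

cat report_with_pending.csv
```

The first row of report_with_pending.csv comes out as 198289|198325|apples, so the review records can now be matched on the first field.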
 

Monday 7 July, 2008

split each line from a file as a new file

Using awk, if you would like to split each record/line of a file into an individual file, with the first line repeated as a header in all the files, use the awk script below.


#start code
BEGIN {
    filename_counter = 10000;
}
{
    if (FNR == 1)
        header = $0                  # remember the header line
    else {
        fname = filename_counter ".txt"
        print header > fname         # header first
        print $0 >> fname            # then the record itself
        close(fname)                 # avoid running out of file handles
        filename_counter++
    }
}
#end code


copy the above code and save as split_file.awk.

Run the script as below (use awk or gawk if nawk is not available):

nawk -f split_file.awk file1
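A quick end-to-end run of the same logic, inline with awk on a made-up three-line input:

```shell
# Demo input: one header line plus two records.
printf 'id|name\n1|alice\n2|bob\n' > file1

# Same logic as split_file.awk, run inline.
awk 'BEGIN { filename_counter = 10000 }
     FNR == 1 { header = $0; next }
     { fname = filename_counter ".txt"
       print header > fname
       print $0 >> fname
       close(fname)
       filename_counter++ }' file1

cat 10000.txt    # id|name, then 1|alice
cat 10001.txt    # id|name, then 2|bob
```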

simple mapping of two files -> map.awk

How to map two files using awk.


# save the below code to a file named as "map.awk"
# start from here
BEGIN{
    FS = OFS = "|";
    # Load all of file1 (the first argument, ARGV[1]) into an array
    # keyed on its first column; getline returns 1 while there is data.
    while ((getline < ARGV[1]) > 0)
    {
        arr[$1] = $0
    }
    close(ARGV[1])
    ARGV[1] = "";   # blank it out so the main loop only reads file2
}
{
    if ($5 in arr)
        print $0 FS arr[$5]
    else
        print $0 FS
}
#end here


Run the file as below:
nawk -f map.awk file1 file2 > file3

file3 will have all the rows from file2, each appended with the matching row of file1 wherever column 5 of file2 matches column 1 of file1.
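A self-contained demonstration with made-up data (the script is written to disk first so the whole thing can be run as-is):

```shell
# Write the map.awk sketch to disk so the demo is self-contained.
cat > map.awk <<'EOF'
BEGIN {
    FS = OFS = "|";
    while ((getline < ARGV[1]) > 0)
        arr[$1] = $0
    close(ARGV[1])
    ARGV[1] = ""
}
{
    if ($5 in arr)
        print $0 FS arr[$5]
    else
        print $0 FS
}
EOF

# Hypothetical demo data: file1 keyed on column 1, file2 joined on column 5.
printf 'k1|one\nk2|two\n' > file1
printf 'a|b|c|d|k1\ne|f|g|h|k9\n' > file2

awk -f map.awk file1 file2 > file3
cat file3
```

Row one of file3 gets the matching file1 row appended; row two, with no match on column 5, just gets the trailing separator.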

Thursday 12 July, 2007

Shell: Find all files recursively with "pattern"

Below is a single command to find, recursively, all files that contain a particular pattern.
Here the find command executes grep, passing each file name (with its path) where the curly braces ({}) are placed.

$ find . -type f -exec grep -il "pattern" {} \;

Here grep searches for the pattern in all the files given as arguments, where the file list is produced by the find command inside backticks (``):

$ grep -il "pattern" `find . -type f`

The command below is very useful when the number of files to search exceeds the shell's argument-list limit, which breaks the backtick version above.
xargs feeds the file names to grep in batches.

$ find . -type f|xargs grep -il "pattern"
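One caveat worth noting: plain xargs splits its input on whitespace, so file names containing spaces break the command above. GNU find and xargs support NUL-separated names for exactly this case (the demo directory and file below are made up):

```shell
# A file whose name contains a space (demo only).
mkdir -p demo
printf 'hello pattern here\n' > 'demo/my file.txt'

# -print0 / -0 pass NUL-terminated names, so spaces in paths are safe.
find demo -type f -print0 | xargs -0 grep -il "pattern"
```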

Wednesday 11 July, 2007

Perl: Convert a text file to microsoft excel sheet

Below is a simple script to convert a text file into an Excel file.
Spreadsheet::WriteExcel must be available in your Perl lib before running this, or you can set your PERL5LIB environment variable to point to a local lib where you have Spreadsheet::WriteExcel.



##### Start Script #############
#!/usr/local/bin/perl -w

use strict;
use Spreadsheet::WriteExcel;

# First argument is the delimiter; the rest are the files to convert.
if ($#ARGV < 1)
{
    print "\n Usage: txt2xls <delimiter> <file>...\n Example: txt2xls \"|\" *.psv\n\n";
    exit;
}

# "|" is a regex metacharacter, so it must be escaped before split().
my $del = ($ARGV[0] =~ /\|/) ? "\\|" : $ARGV[0];
my @files = @ARGV[1..$#ARGV];

foreach my $file (@files) {
    open(TXTFILE, $file) or die "Unable to open $file: $!\n";
    my $wb    = Spreadsheet::WriteExcel->new("$file.xls");
    my $excel = $wb->add_worksheet();
    my $row   = 0;

    while (<TXTFILE>) {
        chomp;
        my @Fld = split(/$del/, $_);

        my $col = 0;
        foreach my $token (@Fld) {
            $excel->write($row, $col, $token);
            $col++;
        }
        $row++;
    }
    close(TXTFILE);
    $wb->close();
}
##### End Script #############

How to run
Copy the contents from Start Script to End Script and place them in a file named txt2xls.

$ chmod a+x txt2xls
$ cp txt2xls ~/bin      # or anywhere on your PATH
$ txt2xls "|" filename
$ ls
filename filename.xls

Shell: csd - cut sort duplicate

Below is a simple script to cut the specified field(s) ("|" separated) from a flat file and display the duplicate values.

###### Start Script #############
#!/bin/sh

if [ $# -lt 2 ]
then
    echo "Usage: $0 <field_number> <filename>"
    exit 1
fi

cut -d"|" -f$1 $2 | sort | uniq -d

###### End Script ###############

How to run
Copy the contents from Start Script to End Script and place them in a file named csd.

$ chmod a+x csd
$ cp csd ~/bin
$ csd 1,3 filename
523|523
e23|e23
r23|r23
t23|t23
y23|y23

Monday 25 June, 2007

Shell: cfs - cut sort unique

Below is a simple script to cut the specified field(s) ("|" separated) from a flat file, sort unique, and display them.

###### Start Script #############
#!/bin/sh

if [ $# -lt 2 ]
then
    echo "Usage: $0 <field_number> <filename>"
    exit 1
fi

cut -d"|" -f$1 $2 | sort -u

###### End Script ###############

How to run
Copy the contents from Start Script to End Script and place them in a file named cfs.

$ chmod a+x cfs
$ cp cfs ~/bin
$ cfs 1,3 filename
523|523
e23|e23
r23|r23
t23|t23
y23|y23

Shell: csu - cut sort unique with count

Below is a simple script to cut the specified field(s) ("|" separated) from a flat file, sort them, and display the unique values with a count.


###### Start Script #############
#!/bin/sh

if [ $# -lt 2 ]
then
    echo "Usage: $0 <field_number> <filename>"
    exit 1
fi

cut -d"|" -f$1 $2 | sort | uniq -c

###### End Script ###############

How to run
Copy the contents from Start Script to End Script and place them in a file named csu.

$ chmod a+x csu
$ cp csu ~/bin
$ csu 2,3 filename
12 523|install_Mac.pdf
12 e23|install_Mac.pdf
12 r23|install_Mac.pdf
10 t23|install_Mac.pdf
2 t23|install_Win.pdf
11 y23|install_Mac.pdf
1 y23|install_Win.pdf
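All three one-liners side by side on a tiny made-up sample, for comparison:

```shell
# Made-up sample where field 1 repeats.
printf 'a|x\na|y\nb|z\n' > sample.psv

cut -d"|" -f1 sample.psv | sort | uniq -d   # csd: values that appear more than once
cut -d"|" -f1 sample.psv | sort -u          # cfs: each distinct value once
cut -d"|" -f1 sample.psv | sort | uniq -c   # csu: each distinct value with its count
```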

Perl: export table to flat file

Below is a script to extract a table from a MySQL database to a flat file.

You need to change DB_NAME, USERNAME and PASSWORD.

##### Start Script #############
#!/usr/bin/perl -w

use strict;
use DBI;
use DBD::mysql;


my ($dbh, $sql, $sth, $re, $header, $table);

if ($#ARGV != 0)
{
    print "Usage: $0 <TABLE_NAME>\n";
    exit;
}

$table = $ARGV[0];

$dbh = DBI->connect('dbi:mysql:DB_NAME', 'USERNAME', 'PASSWORD',
                    {AutoCommit => 0, RaiseError => 1})
    or die "Unable to connect to DB_NAME: $DBI::errstr\n";

$sql=<<_KAL_;

select * from $table
_KAL_

$sth = $dbh->prepare($sql) or die "Unable to prepare $sql\n";

$sth->execute();

open(KAL, ">$table.dat") or die "Unable to open $table.dat for writing\n";
$header=$sth->{NAME};
print KAL join('|', @$header), "|\n";
while ($re = $sth->fetchrow_arrayref())
{
    print KAL join('|', @$re), "|\n";
}
close(KAL);

$sth->finish;

$dbh->disconnect;


##### End Script #############

How to run
Copy the contents from Start Script to End Script and place them in a file named extract.pl.

$ chmod a+x extract.pl
$ ./extract.pl emp
$ more emp.dat