Categories
Hints and Tips Java Peoplecode

Ceil/Ceiling function in PeopleCode

I just used the Java Math library ceil function from PeopleCode to solve the “round up to the nearest 0.5” problem, e.g.

Local JavaObject &mathclass;
Local number &number_to_round, &result;

/* Instantiate java Math class */
&mathclass = GetJavaClass("java.lang.Math");

For &number_to_round = 0.1 To 2.0 Step 0.1
   /* Use ceil function from java to solve problem */
   &result = &mathclass.ceil(&number_to_round * 2) / 2;
   MessageBox(0, "", 0, 0, "Number to Round: " | &number_to_round | " Result: " | &result);
End-For;
Categories
Hints and Tips SQL

CRLF and “GO”

I frequently find myself writing SQL that generates SQL. On SQL Server, I like to build a string that contains the SQL statement followed by “GO”. To do that I use CHAR(13)+CHAR(10)+'GO'+CHAR(13)+CHAR(10) at the end of the string e.g.

{some_generated_SQL}+CHAR(13)+CHAR(10)+'GO'+CHAR(13)+CHAR(10)

That way, when I paste the query results into SSMS or a text editor I get:

{first_generated_SQL}
GO
{second_generated_SQL}
GO
...
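
As a minimal sketch of the pattern, here is a hypothetical query that generates one GRANT per table, each followed by GO (the role name is made up – any generated statement works the same way):

-- Hypothetical example: one GRANT per table, each terminated with GO
SELECT 'GRANT SELECT ON ' + name + ' TO reporting_role;'
       + CHAR(13) + CHAR(10) + 'GO' + CHAR(13) + CHAR(10)
FROM sys.tables
ORDER BY name;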
Categories
Hints and Tips PeopleTools PUM VirtualBox Windows

EnableLinkedConnections and VirtualBox PUM Images

When you map a network drive to the Samba share of a VirtualBox PUM VM in order to install (say) the PeopleTools client, the mapped drive may be invisible to your cmd prompt running as Administrator – something you need in order to update the registry and install the client software.

To work around this on Windows 7 through 10, see this article:

https://technet.microsoft.com/en-us/library/ee844140(v=ws.10).aspx
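
In short, the workaround the article describes is to add the EnableLinkedConnections registry value and reboot. From an elevated prompt that is something like:

rem Machine-wide policy change - reboot afterwards
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLinkedConnections /t REG_DWORD /d 1 /f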

Categories
Hints and Tips Rants

I’m not a smart-arse

Well not all of the time…

When I re-read some of my posts I wonder if I come across as an utter smart-arse. Often my posts appear to be critical of all developers in general – but in many cases my criticisms can be traced back to two fundamentals:

  • Aptitude.
  • Analytical skills.

Both of which appear (to me at least) to be “dying arts”.

Categories
App Engine Hints and Tips PeopleTools Performance

Re-Use in Application Engine

Almost without exception whenever I am asked to review the performance of a PeopleSoft system I discover issues that lead back to locally developed Application Engine processes. In particular, high database SQL parsing rates invariably originate from SQL steps that should have the re-use flag set to Yes.

What this flag does is replace any %Bind() variables in the SQL step with proper bind variables and compile the SQL only once. Without this flag, the %Bind() values are substituted as literals at run-time and the resulting statement is executed directly. This can lead to huge parsing rates, as the offending SQL steps are typically executed within a loop. Of course, executing SQL row by row in a loop is generally contrary to what you should be doing with SQL – set-based processing – but all too often Application Engine is used as a direct replacement for procedural languages such as SQR.
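
As an illustration (the record and field names here are made up), consider a SQL step like this being executed once per row inside a loop:

UPDATE PS_MY_TMP_TBL
   SET PROCESS_FLAG = 'Y'
 WHERE EMPLID = %Bind(EMPLID)

With ReUse set to No, every execution sends a fresh statement to the database with the literal substituted in (WHERE EMPLID = 'KU0001', then 'KU0002', and so on), and each one must be parsed separately. With ReUse set to Yes, the database sees a single statement of the form WHERE EMPLID = :1, parsed once and re-executed with new bind values.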

Some metrics from a system I looked at a while ago:

Approximately 15,300 SQL statements were in the cache, of which over 7,700 originated from a single Application Engine run just 9 times during the day. Those 7,700 could have been reduced to 3 (well, 2 actually) just by setting the ReUse flag to ‘Y’ on the three offending SQL steps. Using set processing, of course, none of them would have been needed 🙂

Categories
App Engine Hints and Tips Peoplecode Peoplesoft PeopleTools Performance Tuning

Jackson Structured Programming (JSP) – Read Ahead and PeopleCode

Jackson Structured Programming – now that brings back memories of my COBOL training at British Telecom in the late 1980s.

What prompted this short post was a dreadful piece of hand-crafted PeopleCode to load a CSV file using a file layout. The usual “Operand of . is null” error occurred unless the input file contained a “blank line” at the end.

The underlying reason for this was a failure to apply one of the fundamental techniques in JSP – the single read-ahead rule:

Single Read-ahead rule: Place the initial read immediately after opening the file, prior to any code that uses the data; place subsequent reads in the code that processes the data, immediately after the data has been processed.

In fact, this approach is exactly what you get when you drag a file layout into a PeopleCode step in Application Engine – sample code that uses the JSP single read-ahead rule.
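
For reference, a stripped-down sketch of that generated pattern (the file name and file layout are placeholders):

Local File &inFile;
Local Rowset &inRowset;

&inFile = GetFile("customer.csv", "R", %FilePath_Relative);
&inFile.SetFileLayout(FileLayout.MY_CSV_LAYOUT);

/* Initial read immediately after opening the file */
&inRowset = &inFile.ReadRowset();
While &inRowset <> Null
   /* ... process the current &inRowset here ... */
   /* Subsequent read immediately after the data has been processed */
   &inRowset = &inFile.ReadRowset();
End-While;
&inFile.Close();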

Categories
Fedora Linux Oracle Linux Peoplesoft Tuning VirtualBox

Reducing PeopleSoft DPK VM Size using zerofree

One of the slightly irritating parts of the build of a VirtualBox VM using the PUM downloads is the fact that the build script copies the DPK files into the VM prior to unpacking them. The data is stored under /opt/oracle/psft/dpk which is a mount point for “disk2”. Typically this disk expands to 23GB+ during the build process as a result of this approach.

To reduce the size of the VMs, what I like to do after the VM has been built is to:

  1. Attach the disk2, disk3 and disk4 .vmdk files to a simple Linux server VM – I use a minimal Fedora 24 install, but it doesn’t really matter as long as the zerofree utility is installed. Note: you could install zerofree into the PeopleSoft VM and do this step under Oracle Linux, but I use a smaller Linux install as it boots quicker.
  2. Boot the VM and mount the three disks read/write
  3. Delete any large files I no longer need e.g. the DPK tgz/zip files, PeopleTools 8.53/8.54 client folders, ptengs.db
  4. Re-mount the disks read only (mount -o remount,ro /dev/sd[bcd]1 {mount-point} )
  5. Run zerofree -v /dev/sd[bcd]1 on each disk to zero the empty space created by the file deletions (see the shell sketch after this list).
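
Putting steps 4 and 5 together, a rough shell sketch run as root (the device names are assumptions – check them with lsblk or findmnt first):

# Assumed devices for disk2/disk3/disk4 - verify before running
for dev in /dev/sdb1 /dev/sdc1 /dev/sdd1; do
    mp=$(findmnt -n -o TARGET "$dev")   # where the filesystem is currently mounted
    mount -o remount,ro "$dev" "$mp"    # step 4: re-mount read only
    zerofree -v "$dev"                  # step 5: zero the space freed by the deletions
done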

I then close down the VM, detach the .vmdk files and clone the .vmdk disk files to .vdi files using the Virtual Media Manager. This has the effect of shrinking the resulting files – essentially doing a “VBoxManage modifyhd {vdi_file} --compact”.
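
For anyone who prefers the command line to the Virtual Media Manager GUI, the clone step looks something like this (file names are examples; newer VirtualBox releases name the verb clonemedium):

rem Repeat for disk3 and disk4
VBoxManage clonehd disk2.vmdk disk2.vdi --format VDI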

Once I have the .vdi versions of the files, I remove the .vmdk files from the PeopleSoft VM, add back the .vdi files, boot the PeopleSoft VM, test it and delete the original .vmdk files if everything works.

In general, this approach releases approximately 27 GB per VM – making the resulting VMs around 30-36 GB. Still absurdly big of course 🙂

Categories
Configuration Elasticsearch ELK Hints and Tips Peoplesoft VirtualBox

DPK VirtualBox Memory Allocation

Even though my laptop has a decent 12 GB of RAM, I still like to minimise the RAM allocated to PeopleSoft VMs.

My experience is that the sweet spot is 3072 MB for VMs without SES – I never bother with the beast that is SES. After all, it is a dead application – Elasticsearch cannot arrive soon enough for me. You can get away with 2560 MB of RAM, but you will see some swapping in OEL – not good even if you have a fast SSD. Mine is “ok” – a Samsung 1 TB 850 EVO – but allowing any swapping still makes the system slow down considerably.
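
If you script your VM setup, the allocation can be changed while the VM is powered off with something like this (the VM name is just an example):

rem VM must be powered off; name is an example
VBoxManage modifyvm "HCM-92-PUM" --memory 3072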

Categories
App Engine Languages Peoplesoft PeopleTools Perl Process Scheduler SQR

Perl and PeopleSoft

Way back in 1998 I was implementing PeopleSoft Financials 7.5 for a UK charity. SQR and Application Engine (the COBOL version back then) were the only options available in the PeopleSoft toolset for updating the database. Other than straight SQL updates in SQL*Plus, of course!

Whilst SQR was an OK tool, I always felt it lacked so many capabilities. In fact, at that point it could not even read a CSV file – I had to code a user DLL in C to achieve even that. All very frustrating.

Having rescued various projects using perl scripts prior to this, I decided I would add perl as an available language to the Process Scheduler. Taking the SQR include files for the process scheduler API as an example, I emulated the same approach in perl. It worked brilliantly and allowed me to add some sophisticated features to PeopleSoft, including:

  • SQL and query output to CSV and XLS formats (remember this was prior to the PeopleSoft Internet Architecture) through the Spreadsheet::WriteExcel and DBD::CSV CPAN modules
  • User defined SFTP/FTP/SCP file transfers to and from third-party systems
  • Bank Statement loads by encapsulating mainframe remote access software into process scheduler jobs
  • Exchange rate loading via Website “screen scraping”
  • Spreadsheet Aged Debt reporting
  • Fuzzy duplicate customer identification/matching
  • Automatic customer identification in Accounts Receivable

Here’s the start of one such perl script from 2004:

#!/usr/bin/perl
#
# This is a perl routine to find possible matches for originator's
# sort code and bank account by looking to find possible customers.
#
# (1) Fetch the list of bank statement entries.
# (2) Try to find customer like this.
#
# Author: XXX
# Date : 29th January 2004.
#
# Amendment History
# -----------------
# 29-JAN-2004 XXX First version
#
#$debug = 1;
use lib 'h:\perl';
use strict;
use DBI;
use Spreadsheet::WriteExcel::Big;
use String::Approx qw(amatch);
use Date::Calc qw(Delta_Days);
require 'prcsapi.pl';

# Connection parameters - resolved from the process scheduler command line
our ($dbtype, $dbname, $accessid, $accesspswd);

my $row = 0;

#
# Connect to database using parameters resolved from command line
#
my $dbh = DBI->connect( "dbi:$dbtype:$dbname", $accessid, $accesspswd ) or die $DBI::errstr;

The require of prcsapi.pl brings in all the subroutines needed for the process scheduler API. Updating the process scheduler status is then simply a call to the appropriate API function:

Update_Process_Status($prcs_run_status_processing,'Processing has started.');

More recently, I have taken a similar approach for Ruby … more on that later.

Enjoy.

Categories
Performance

More Hardware Vicar?

I’m old school. I started in “Personal Computing” when 2K of RAM was the norm. When I started programming in COBOL on IBM mainframes, we had machines with 8, 16 or 32 megabytes of RAM. Not gigabytes. Processor speeds were in single-digit megahertz – not gigahertz.

Which is why I often despair when developers are so accepting of PPP (Piss Poor Performance) and are too quick to say “we need more/better hardware”. In my experience, that is rarely the reality. All too often it simply comes down to badly written and/or badly designed code. Or utter rubbish SQL … 🙂

An old accountant colleague of mine was a keen advocate of the “reasonableness test”. If a number didn’t seem reasonable then it probably wasn’t. I like to apply a similar thought process to performance in program execution.

So why don’t developers *know* when performance is just plain *wrong*? I think it all comes down to having nothing to compare it against and little or no appreciation of how long things should take.