Categories: PeopleTools, Performance, SQL, SQL Server

PeopleTools and Parallelism on SQL Server

During a recent discussion with another PS Admin running on SQL Server, it became apparent that his efforts to improve application SQL performance by adding more CPUs (with the associated license costs) were based on a fundamentally wrong assumption:

PeopleTools/Application SQL will go parallel if needed.

This is simply not true. In fact, most SELECT SQL can never go parallel under PeopleTools on SQL Server, because it is executed through cursors and PeopleTools requests a cursor type of FAST_FORWARD. FAST_FORWARD cursors result in an execution plan with a NonParallelReason value of NoParallelFastForwardCursor.
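
If you want to see whether this is biting on your own system, here is a minimal sketch that trawls the plan cache for plans carrying that attribute. It assumes you have VIEW SERVER STATE permission; depending on version and how the cursor statements are cached, you may need to go via sys.dm_exec_cached_plans instead:

  WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
  SELECT TOP (20)
         qs.execution_count,
         qp.query_plan
  FROM sys.dm_exec_query_stats AS qs
  CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
  -- Keep only plans the optimizer explicitly marked as blocked from
  -- parallelism by a fast-forward cursor:
  WHERE qp.query_plan.exist(
          '//QueryPlan[@NonParallelReason="NoParallelFastForwardCursor"]') = 1;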

Categories: PeopleTools, Performance, SQL Server, Tuning

PeopleTools Indexing on SQL Server

PeopleTools was designed to be database agnostic. In the past, supported databases included Gupta SQLBase, Allbase, Informix and Sybase. The ability to support multiple platforms, and to add others relatively easily, was a consequence of some excellent initial design decisions. Sadly, database platform support has dwindled to Oracle, SQL Server and DB2 in more recent releases of PeopleTools.

However, supporting multiple databases in this way meant that many database-specific features were not taken advantage of. Since the Oracle takeover of PeopleSoft, there have been some moves to add Oracle-specific feature support (e.g. Global Temporary Tables), but almost nothing on the other databases.

Categories: Performance, SQL Server, Trace Flags, Tuning

SQL 2014 Cardinality Estimator Trace Flags

For reference (a short usage sketch follows the list):

  • 9481 – Use when running SQL Server 2014 with the default database compatibility level of 120. Trace flag 9481 forces the query optimizer to use version 70 (SQL Server 7.0 through SQL Server 2012) of the cardinality estimator when creating the query plan.
  • 2312 – Use when running SQL Server 2014 with database compatibility level 110, the compatibility level for SQL Server 2012. Trace flag 2312 forces the query optimizer to use version 120 (SQL Server 2014 and later) of the cardinality estimator when creating the query plan.
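
Both flags can be set server-wide or applied per query. A sketch (the SELECT itself is purely illustrative; QUERYTRACEON requires elevated permissions, which is why it is often wrapped in a plan guide in practice):

  -- Server-wide, affects every query until turned off - use with care:
  DBCC TRACEON (9481, -1);

  -- Per query, via a hint:
  SELECT EMPLID, EFFDT
  FROM PS_JOB
  WHERE EMPLID = 'KU0001'
  OPTION (QUERYTRACEON 9481);
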
Categories: App Engine, Hints and Tips, PeopleTools, Performance

Re-Use in Application Engine

Almost without exception, whenever I am asked to review the performance of a PeopleSoft system I discover issues that lead back to locally developed Application Engine processes. In particular, high database SQL parse rates invariably originate from SQL steps that should have the re-use flag set to Yes.

What this flag does is replace any %Bind() variables in the SQL step with proper bind variables and compile the SQL only once. Without this flag, the %Bind() values are substituted as literals at run-time and the resulting statement is executed directly. This can lead to huge parse rates, as the offending SQL steps are typically executed within a loop. Of course, this is generally contrary to what you should be doing with SQL – set processing – but all too often Application Engine is used as a direct replacement for procedural languages such as SQR.
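
To make that concrete, here is a sketch of what the database actually sees in each case. The table, column and values are hypothetical, and @P1 is simply how a true bind variable surfaces in a SQL Server trace:

  -- The step SQL as written in the Application Engine step:
  --   UPDATE PS_MY_TBL SET PROCESS_FLAG = 'Y' WHERE EMPLID = %Bind(EMPLID)

  -- ReUse = No: the %Bind() is substituted as a literal, so every loop
  -- iteration hands the database a brand new statement to parse:
  UPDATE PS_MY_TBL SET PROCESS_FLAG = 'Y' WHERE EMPLID = 'KU0001';
  UPDATE PS_MY_TBL SET PROCESS_FLAG = 'Y' WHERE EMPLID = 'KU0002';
  -- ... and so on, one distinct statement per iteration.

  -- ReUse = Yes: parsed and compiled once with a real bind variable,
  -- then simply re-executed with a new value on each iteration:
  UPDATE PS_MY_TBL SET PROCESS_FLAG = 'Y' WHERE EMPLID = @P1;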

Some metrics from a system I looked at a while ago:

Approximately 15,300 SQL statements in the cache, of which over 7,700 originated from a single Application Engine program run just 9 times during the day. Those 7,700 could have been reduced to 3 (well, 2 actually) just by setting the ReUse flag to ‘Y’ on the three offending SQL steps. Using set processing, of course, none of them would have been needed 🙂

Categories: App Engine, Hints and Tips, PeopleCode, PeopleSoft, PeopleTools, Performance, Tuning

Jackson Structured Programming (JSP) – Read Ahead and PeopleCode

Jackson Structured Programming – now that brings back memories of my COBOL training at British Telecom in the late 1980s.

What prompted this short post was a dreadful piece of hand-crafted PeopleCode to load a CSV file using a file layout. The usual “Operand of . is null” error occurred unless the input file contained a “blank line” at the end.

The underlying reason for this was a failure to apply one of the fundamental techniques in JSP – the single read-ahead rule:

Single Read-ahead rule: Place the initial read immediately after opening the file, prior to any code that uses the data; place subsequent reads in the code that processes the data, immediately after the data has been processed.

In fact, this approach is exactly what you get when you drag a file layout into a PeopleCode step in Application Engine – sample code that uses the JSP single read-ahead rule.
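
The same shape works in any language. Purely as an illustration, here it is as a T-SQL cursor loop over a hypothetical table: the initial FETCH sits immediately after the OPEN, the subsequent FETCH is the last statement in the body, and so the loop condition always tests a row that has already been read – no trailing blank line required:

  DECLARE @EMPLID CHAR(11);

  DECLARE c CURSOR FAST_FORWARD FOR
      SELECT EMPLID FROM PS_MY_TBL;   -- hypothetical table

  OPEN c;
  FETCH NEXT FROM c INTO @EMPLID;     -- the initial read-ahead

  WHILE @@FETCH_STATUS = 0
  BEGIN
      PRINT @EMPLID;                  -- process the row already read
      FETCH NEXT FROM c INTO @EMPLID; -- subsequent read, after processing
  END;

  CLOSE c;
  DEALLOCATE c;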

Categories: Performance

More Hardware Vicar?

I’m old school. I started in “Personal Computing” when 2K of RAM was the norm. When I started programming in COBOL on IBM mainframes, we had machines with 8, 16 or 32 Megabytes of RAM. Not Gigabytes. Processor speeds were in single-digit Megahertz – not Gigahertz.

Which is why I often despair when developers are so accepting of PPP (Piss Poor Performance) and are too quick to say “we need more/better hardware”. This is, in my experience, rarely the reality. All too often it simply comes down to badly written code and/or badly designed code. Or utter rubbish SQL … 🙂

An old accountant colleague of mine was a keen advocate of the “reasonableness test”. If a number didn’t seem reasonable then it probably wasn’t. I like to apply a similar thought process to performance in program execution.

So why don’t developers *know* when performance is just plain *wrong*? I think it all comes down to having nothing to compare it against and little or no appreciation of how long things should take.

Categories: Administration, Humour, Oracle, PeopleSoft, PeopleTools, Performance, Tuning

A Conversation with a PeopleSoft “Developer” – Part 2

Another amusing conversation and somewhat shocking “discovery”.

More poorly performing SQL – obviously missing any sort of suitable index (this one on Oracle).

Me (to developer): Could you add a suitable composite (multi-column) index to table PS_xxxx please, for columns C, B and A – there seems to be frequent access using these three columns that would be vastly improved by adding an index.

Developer: OK.

After a few days I notice new “alternate search key” indexes in the DEV environment – one for each of the columns A, B and C.

Me: Did you add that index?

Developer: Yes, but in the testing I did it wasn’t very much faster.
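
For anyone wondering why it wasn’t much faster, here is the shape of what was asked for versus what was delivered. The index names follow the PeopleTools PS0/PSA naming conventions but are placeholders, as are the columns, and the DDL is the same on Oracle or SQL Server:

  -- Asked for: one composite index matching the three-column access path,
  -- which a single index seek can satisfy:
  CREATE INDEX PSAXXXX ON PS_XXXX (C, B, A);

  -- Delivered: three separate single-column indexes. At best the optimizer
  -- seeks on one of them and filters the rest (or attempts an index
  -- intersection); neither comes close to the single composite seek:
  CREATE INDEX PS0XXXX ON PS_XXXX (A);
  CREATE INDEX PS1XXXX ON PS_XXXX (B);
  CREATE INDEX PS2XXXX ON PS_XXXX (C);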

Categories: Humour, PeopleSoft, Performance

A Conversation with a PeopleSoft “Developer” – Part 1

A bit of amusement, but unfortunately based on actual conversations I have had with developers over the years.

This one started with me noticing some heavy logical I/O at the database level across various views. These originated from various user actions – search records on pages, “submit” buttons on approval pages and even scheduled queries.

A bit of investigation into each case always brought me to a single locally developed “workflow” table. It had the usual structure for a workflow/worklist table: all the workflow key columns, followed by the “data” key columns. Every column in the table was flagged as a key column. It was instantly obvious that the developer did not understand the importance of indexes, and possibly quite a few other things. Here’s the conversation:

Me: So, this worklist table PS_xxx_WL – you created that right?

Developer: Yes.

Me: And you developed all the approval pages, the PeopleCode and the search records that use this table?

Developer: Yes (starting to look a bit nervous).
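
For the record, why that table hurt: in PeopleTools a record’s key columns drive the generated unique key index, so flagging every column as a key yields something like the sketch below – an index as wide as the row itself, and one that a filter on the trailing “data” columns cannot seek into. All names here are invented for illustration:

  -- Every column flagged as a key means the generated key index spans
  -- every column in the table:
  CREATE UNIQUE INDEX PS_XXX_WL ON PS_XXX_WL
      (BUSPROCNAME, ACTIVITYNAME, EVENTNAME, WORKLISTNAME, INSTANCEID,
       TRANSACTIONID, CASE_ID, BUSINESS_UNIT, STATUS_FLAG);
  -- A predicate on CASE_ID alone cannot seek here, because CASE_ID is a
  -- trailing column - so every lookup walks a second copy of the table.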

Categories: CRM, PeopleSoft, Performance, Tuning

RC_BACKLOG_VW in PeopleSoft CRM 9.1

Seriously? Who thought it would be a good idea to create a view that returns the DISTINCT list of BUSINESS_UNIT values from RC_CASE? Talk about an O(n) performance disaster – the cost grows with every case row in the table.

What’s wrong with a simple SELECT BUSINESS_UNIT FROM PS_BUS_UNIT_TBL_RC WHERE EXISTS (… case sub-query …)?
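
Something like this sketch – the sub-query body is elided above, so what follows is an assumption, with PS_RC_CASE as the underlying case table. Drive from the small business unit prompt table and merely probe the case table, instead of scanning and de-duplicating every case row:

  SELECT B.BUSINESS_UNIT
  FROM PS_BUS_UNIT_TBL_RC B
  WHERE EXISTS (SELECT 1
                FROM PS_RC_CASE C               -- assumed sub-query body
                WHERE C.BUSINESS_UNIT = B.BUSINESS_UNIT);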

Oh look … Two index seeks … shocker! It scales too …. wow … well I never.

Duh.

Categories: DataMover, PeopleSoft, PeopleTools, Performance

DataMover EXPORT Performance

Some interesting statistics from some recent tests:

  • PeopleTools 8.49.28 EXPORT of a table containing 1.6 million rows (approx 1.6 GB of data): 3 hours 15 minutes
  • PeopleTools 8.53.20 IMPORT of the same data: 34 minutes
  • PeopleTools 8.53.20 EXPORT of the same table containing the identical 1.6 million rows (approx 1.6 GB of data): 24 minutes

Some background info that may be relevant:

  • 8.49 system is non-Unicode and uses CHAR()
  • 8.53 system is Unicode and uses VARCHAR() i.e. NVARCHAR()
  • The EXPORTs were done on identical VMs in the data centre. The IMPORT was actually done “over the wire” from a client PC via a drive mapping – I would expect it to be much faster with a local input file, running in the data centre.

The speed difference between 8.49 and 8.53 in terms of EXPORT needs further investigation.