Opinionated Software

… we have opinions about everything!

PeopleTools and Parallelism on SQL Server —

During a recent discussion with another PS Admin running on SQL Server, it became apparent that his efforts to improve application SQL performance by adding more CPUs (with the associated license costs) were based on a fundamentally wrong assumption:

PeopleTools/Application SQL will go parallel if needed.

This is simply not true. In fact, most SELECT SQL can never go parallel under PeopleTools on SQL Server because it is run through cursors, and PeopleTools requests a cursor type of FAST_FORWARD. FAST_FORWARD cursors result in an execution plan with a NonParallelReason value of NoParallelFastForwardCursor.

Pretty self-explanatory, I think. Of course, if the cursor type requested had been FORWARD_ONLY then a parallel plan would be possible.
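
To see the difference, here is a minimal T-SQL sketch – the table name is hypothetical and purely for illustration; the only change between the two declarations is the cursor type:

-- What PeopleTools effectively asks for: a FAST_FORWARD cursor, which stamps the
-- plan with NonParallelReason = NoParallelFastForwardCursor
DECLARE ps_cur CURSOR FAST_FORWARD FOR
    SELECT EMPLID, SUM(AMOUNT)
    FROM   PS_SOME_BIG_TABLE        -- hypothetical table name
    GROUP BY EMPLID;

-- The same statement declared FORWARD_ONLY remains eligible for a parallel plan
DECLARE alt_cur CURSOR FORWARD_ONLY FOR
    SELECT EMPLID, SUM(AMOUNT)
    FROM   PS_SOME_BIG_TABLE
    GROUP BY EMPLID;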

So all that additional CPUs will get you is more concurrency in terms of users, and perhaps some better speed for non-PS SQLs. Bear in mind that only a very small percentage of delivered application SQLs are complex enough to even reach the default cost threshold for parallelism of 5 (I run my systems with a threshold of 50 to 100).

Unfortunately, there are times when parallelism could help – mostly in power-user queries – but they are all forced single-threaded by the choice of cursor type.

Someone more cynical than myself might think this was deliberate 🙂

Top tip: Remember to add OPTION(MAXDOP 1) to any PeopleTools SQL you paste into SSMS if you want to see an execution plan even close to the one PeopleTools gets. You also need to be sure that all of your connection options in SSMS (ARITHABORT etc.) exactly match the ones used by your application/process scheduler servers to guarantee the same plan.
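
As a rough sketch, the comparison in SSMS looks something like this (the program_name filter and SET values are illustrative – check what your own app server connections actually use):

-- See which options the application server / process scheduler sessions run with
SELECT session_id, program_name, arithabort, ansi_nulls, ansi_padding,
       ansi_warnings, quoted_identifier, concat_null_yields_null
FROM   sys.dm_exec_sessions
WHERE  program_name LIKE 'PeopleSoft%';   -- hypothetical filter; adjust to your setup

-- Match those settings in your SSMS session. ARITHABORT is the usual culprit:
-- SSMS defaults it ON while ODBC/JDBC connections typically run with it OFF.
SET ARITHABORT OFF;

-- Then run the pasted PeopleTools SQL with a serial plan, as the cursor would force
SELECT COUNT(*)
FROM   PSTREENODE
OPTION (MAXDOP 1);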


PeopleTools Indexing on SQL Server —

PeopleTools was designed to be database agnostic. In the past, the supported databases included Gupta SQLBase, Allbase, Informix and Sybase. The ability to support multiple platforms, and to add others relatively easily, was a consequence of some excellent initial design decisions. Sadly, database platform support has dwindled to Oracle, SQL Server and DB2 in more recent releases of PeopleTools.

However, supporting multiple databases in this way meant that many database-specific features were not taken advantage of. Since the Oracle takeover of PeopleSoft, there have been some moves to add Oracle database-specific feature support, e.g. GTTs, but almost nothing on the other databases.


PSTREENODE Clustered Index Change —

I noticed that in PT 8.55 (well, at least in .08 and later) the PeopleTools meta-data for the clustered index on PSTREENODE lost the “clustered” attribute. I’m not sure why this was changed, but you should be aware that it could impact your execution plans – especially if you use row-level security.

PT 8.55 loses Clustered Flag on “key”

“Key” is now Nonclustered

And, more importantly, what we now have is a “heap”, so the data is no longer ordered as we might expect. This also forces any sequential access to go via the index allocation map pages. Overall, changing this is not something to be done lightly, and I do wonder at the thought process behind this change – or was it just a mistake?
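
If you want to check what a given environment actually has, sys.indexes will tell you straight away (index_id 0 means the table is a heap, 1 is the clustered index):

SELECT i.name, i.index_id, i.type_desc, i.is_unique
FROM   sys.indexes AS i
WHERE  i.object_id = OBJECT_ID('PSTREENODE');

If you do decide to make the key index clustered again, script the existing index and rebuild it with CREATE UNIQUE CLUSTERED INDEX … WITH (DROP_EXISTING = ON) rather than hand-crafting the column list – and bear in mind that a build from Application Designer will recreate it nonclustered unless you also restore the clustered flag in the index meta-data.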


What’s in a word? (\w regexp shorthand class) —

Well, not just letters of the alphabet, it seems.

Take the case of the logstash pattern WORD:

WORD \b\w+\b

But the shorthand character class \w matches [a-zA-Z0-9_] – notice the digits and underscore! So WORD is not really a WORD!

REALWORD \b[a-zA-Z]+\b

would be better … although I suppose things might be different in Unicode. Generally, though, even when log files are Unicode, the data itself is frequently still effectively ASCII.
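
The same shorthand behaves the same way in Oracle regular expressions, which makes for a quick way to see it in action (illustrative only – grok obviously isn’t running Oracle, but \w means the same thing):

SELECT REGEXP_SUBSTR('abc_123 def', '\w+')       AS word_class,   -- returns 'abc_123'
       REGEXP_SUBSTR('abc_123 def', '[a-zA-Z]+') AS letters_only  -- returns 'abc'
FROM   dual;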


REGEXP Problem —

A good friend asked me how to replace the last full stop (period) in a field with another character – e.g. a comma – using a regular expression. When I asked “what language?” it turned out he wanted to do this using REGEXP_REPLACE in Oracle SQL.

It’s a problem with a couple of solutions that I could immediately think of – one more “Oracle”, the other more “regular expression”.

First the Oracle solution:

REGEXP_REPLACE({FIELD},'\.',',',INSTR({FIELD},'.',-1))

This uses the backward-search capability of INSTR() (the -1 start position) to find the position of the last ‘.’ in the field, so that REGEXP_REPLACE only starts replacing from that point onwards.

Now a “pure” regular expression version:

REGEXP_REPLACE({FIELD},'(\.)([^.]*$)',',\2')

This one is a bit more involved as it uses grouping and a negated character class ([^.] = “not a period”) as well as a back reference (\2).
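
Both give the same answer, which is easy enough to verify against a made-up literal:

SELECT REGEXP_REPLACE('one.two.three.txt', '\.', ',',
                      INSTR('one.two.three.txt', '.', -1)) AS oracle_style,
       REGEXP_REPLACE('one.two.three.txt', '(\.)([^.]*$)', ',\2') AS regexp_style
FROM   dual;

-- both columns return 'one.two.three,txt'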

 


CROSS/OUTER APPLY and Performance —

One quite useful technique to be aware of is the use of CROSS or OUTER APPLY with an in-line SELECT, e.g.

 

INSERT INTO {TABLE}
SELECT A.{FIELDS}, APP.{FIELDS}
FROM {TableA} A
CROSS/OUTER APPLY (
    SELECT {more FIELDS}
    FROM {TableB} B
    WHERE .. A.{some_field} = B.{some_field}
) APP

However, with CROSS APPLY the SQL Server optimizer tends to rewrite the logic into regular joins. Mostly this is a good thing, but not if you want to “force” the driving table to be {TableA} – which can offer a considerable performance boost in some circumstances.

What you can do to force this behaviour is add TOP 1 to the inner SELECT – assuming you expect only one row back from the CROSS APPLY SQL, or you have ordered the result set and *do* just want the first row, e.g. the maximum/minimum effective date.
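
As a sketch of the effective-date case – PS_JOB and PS_NAMES are real PeopleSoft records, but the column list here is trimmed and purely illustrative – the pattern looks something like this:

SELECT J.EMPLID, J.EMPL_RCD, J.EFFDT, N.NAME
FROM   PS_JOB J
OUTER APPLY (
    SELECT TOP 1 N1.NAME
    FROM   PS_NAMES N1
    WHERE  N1.EMPLID = J.EMPLID
    AND    N1.EFFDT <= J.EFFDT
    ORDER BY N1.EFFDT DESC      -- most recent effective-dated row wins
) N;

The TOP 1 / ORDER BY combination stops the APPLY being flattened into an ordinary join, so PS_JOB stays as the driving table.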

Note: CROSS/OUTER APPLY is SQL Server-specific syntax for what LATERAL provides in Oracle and other databases.


Powershell ISE Editor Tip —

Here’s an interesting tip – if you need an editor on a Windows server that handles Unix line endings, you can use PowerShell ISE … it’s not installed by default, but open a PowerShell prompt and enter these commands:

Import-Module ServerManager
Add-WindowsFeature PowerShell-ISE

And you will have a half-decent editor.


Elastic.On{2017} —

This was my first Elastic.On conference and I really enjoyed it. Not everything was perfect, so let’s get a few minor “gripes” out of the way first:

  • Some of the sessions that included customer stories were in a Q&A format, with the questions led by an Elastic interviewer. These sessions all seemed a bit contrived to me, with very much a “sales” focus. Probably not the right type of session for a cynic like me to attend – I had hoped to hear a bit more detail and less “Elastic are great!” or “Look how clever we are”.
  • Sound quality in the larger rooms was very variable – you had to choose where to sit very carefully. Stage B was particularly poor in some locations, despite the many loudspeakers around the room.
  • The smaller session venues were difficult to hear in, as background noise from the open areas and the larger stages tended to “drown out” the speakers.
  • Restroom capacity was insufficient at times with long queues.
  • 2.4 GHz Wi-Fi coverage was poor – it only seemed to be available in Stage B.
  • Someone massively underestimated the number of buses required to get everyone to the party at the California Academy of Science.

The positives:

  • The Android app for the conference provided great information. Kudos to the team that did that.
  • The food and drink were plentiful and of great quality. The food trucks were exceptional in both general quality and the range of options available. The Maine lobster roll I had one day was particularly memorable.
  • Speaker quality was extremely good overall – I left only one session early because the speaker was quite poor, and even then that was only one factor in my decision, as the subject turned out to be not quite what I had hoped.
  • Due to the international nature of the people involved, English was not always a speaker’s first language, but in all cases their English and diction were extremely good.
  • All the technical sessions I attended were very good and the upcoming features were really interesting. The whole stack is progressing and maturing rapidly.
  • The Elastic guys were all very approachable, helpful and nice to talk to. The customers I met also had some interesting use-cases to share and I certainly discovered a whole new range of applications for the stack.

So overall, would I go again? Most definitely yes! This is a great set of products developed by what looks to be a great team of talented people. I suspect they will need a bigger venue next year though …