fsutil fsinfo ntfsinfo X:
where X: is the drive letter. This gives output like the following:
NTFS Volume Serial Number : 0x101051a010518e1a
NTFS Version : 3.1
LFS Version : 2.0
Number Sectors : 0x0000000074498860
Total Clusters : 0x000000000e89310c
Free Clusters : 0x000000000447f222
Total Reserved : 0x00000000000016e2
Bytes Per Sector : 512
Bytes Per Physical Sector : 512
Bytes Per Cluster : 4096
Bytes Per FileRecord Segment : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length : 0x0000000025ec0000
Mft Start Lcn : 0x00000000000c0000
Mft2 Start Lcn : 0x0000000000000002
Mft Zone Start : 0x0000000003cfca20
Mft Zone End : 0x0000000003d00040
Max Device Trim Extent Count : 512
Max Device Trim Byte Count : 0xffffffff
Max Volume Trim Extent Count : 62
Max Volume Trim Byte Count : 0x40000000
Resource Manager Identifier : 5646BA81-xxxx-yyyy-zzzz-185E0F1F2F38
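The hex fields above are easy to turn into human-readable sizes. A quick Python sketch (using the values from this particular volume – variable names are mine) shows how the numbers hang together:

```python
# Values copied from the fsutil output above (hex as reported).
number_sectors = 0x74498860
total_clusters = 0x0E89310C
free_clusters = 0x0447F222
bytes_per_sector = 512
bytes_per_cluster = 4096

# Sanity check: sectors and clusters describe the same volume size.
assert number_sectors * bytes_per_sector == total_clusters * bytes_per_cluster

volume_bytes = total_clusters * bytes_per_cluster
free_bytes = free_clusters * bytes_per_cluster

gib = 1024 ** 3
print(f"Volume size: {volume_bytes / gib:.1f} GiB")
print(f"Free space : {free_bytes / gib:.1f} GiB")
print(f"Sectors per cluster: {bytes_per_cluster // bytes_per_sector}")
```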
The 260 character limit on file paths (MAX_PATH) makes deleting files from applications like WebLogic problematic (especially the .patch_storage sub-folder structure). As a result, the PeopleTools DPK “cleanup” command doesn’t actually clean everything up.
ROBOCOPY D:\TEMP\EMPTY .patch_storage /PURGE
or use /MIR to delete the files recursively, where D:\TEMP\EMPTY is an empty folder. ROBOCOPY mirrors the empty source into the target, removing everything without tripping over the path limit.
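If ROBOCOPY isn’t convenient, the same limit can be side-stepped from a script. A sketch in Python (the \\?\ extended-length prefix is a Windows convention; the helper name and example path are mine):

```python
import os
import shutil
import sys

def deep_delete(path):
    """Recursively delete a directory tree, working around the
    260-character MAX_PATH limit on Windows by using the \\?\
    extended-length path prefix. On other platforms the prefix
    is unnecessary and the path is used as-is."""
    target = os.path.abspath(path)
    if sys.platform == "win32" and not target.startswith("\\\\?\\"):
        target = "\\\\?\\" + target
    shutil.rmtree(target)

# Example (hypothetical path):
# deep_delete(r"E:\psft\pt\ps_home\.patch_storage")
```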
Further to my post on using Elasticsearch and Kibana to visualize SQL Server logspace usage, it turned out that this data allowed me to suggest rescheduling a large update job so that it spans a log backup point.
As a result of this we should be able to reduce peak logspace usage from 25% of allocated space to around 20% – thus allowing the log file to be reduced in size dramatically.
I will be attending Elasticon 2017 in San Francisco in March.
Looking forward to it.
A useful Perl snippet for reading UTF-16LE files:
open my $fh, '<:raw:perlio:encoding(UTF-16LE):crlf', $filename or die "Cannot open $filename: $!";
which will convert CR/LF combinations to LF only. Alternatively, to keep them intact:
open my $fh, '<:raw:perlio:encoding(UTF-16LE)', $filename or die "Cannot open $filename: $!";
Useful for reading Windows registry export files, SQL Server log export files, etc.
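For comparison, the same idea expressed in Python (the helper name is mine; newline=None gives universal-newline translation, newline="" leaves line endings alone):

```python
def read_utf16le(filename, keep_crlf=False):
    """Read a UTF-16LE file as text. By default CR/LF pairs are
    translated to LF, mirroring Perl's :crlf layer; pass
    keep_crlf=True to leave line endings intact."""
    with open(filename, encoding="utf-16-le",
              newline="" if keep_crlf else None) as fh:
        return fh.read()
```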
There are many possible ways to collect logspace usage data from SQL Server – this was a quick way using the tools I had at hand: DBCC, Perl, Elasticsearch and Kibana.
All I did was capture the output of:
DBCC SQLPERF(LOGSPACE)
into a temporary table via a simple Perl script using DBI. I then ran a SELECT against the temporary table in the same script and posted the resulting data into ES using the Search::Elasticsearch CPAN module. The ES index was very simple – just four fields: database, logspace, logspaceused (%) and the timestamp of the capture (GETDATE()).
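The original was Perl with DBI and Search::Elasticsearch; as a sketch of the same shape, here is the INSERT … EXEC pattern for capturing the DBCC output into a temp table, plus a pure-Python helper for shaping rows into the four-field ES documents (the temp table and column names are my choice, not from the original script):

```python
# T-SQL to capture DBCC SQLPERF(LOGSPACE) output into a temp table
# (table and column names are illustrative):
CAPTURE_SQL = """
CREATE TABLE #logspace (
    database_name sysname,
    log_size_mb   float,
    log_used_pct  float,
    status        int
);
INSERT INTO #logspace
    EXEC ('DBCC SQLPERF(LOGSPACE)');
SELECT database_name, log_size_mb, log_used_pct, GETDATE()
FROM #logspace;
"""

def rows_to_docs(rows):
    """Shape (database, logspace, logspaceused, timestamp) rows into
    documents matching the simple four-field Elasticsearch index."""
    return [
        {"database": db, "logspace": size,
         "logspaceused": used, "timestamp": ts}
        for db, size, used, ts in rows
    ]
```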
After that, all I had to do was visualize the data using Kibana. Here’s some sample output:
Logspace (%) usage over time
A great way to see the log space pressure points, which can easily be tied back to the specific batch processes running at those times.
A quick tip: if you want lots of debugging feedback in CRM Display Template rendering, just create a user ID CSPEER (Chris Speer) and use that. Chris Speer wrote a lot (all?) of the Application Package code for Display Templates and handily left debugging (MessageBox) code in place that only runs when the currently logged-on user is CSPEER.
Useful to know.
Note: You will need to hack the filename the debug output goes to – it still refers to a UNC path of a machine at PeopleSoft.
Of course, if you don’t want to edit the code at all you could:
- Create a NetBIOS alias called “sclapps532” on the application server through a Windows registry entry. In HKLM\SYSTEM\CurrentControlSet\services\LanmanServer\Parameters, just add a string value called OptionalNames with a value of “sclapps532”. Personally, I would also create a matching DNS CNAME entry for completeness.
- As the PeopleCode filename refers to a share name CR900DVL_LOGS, you will also need to create that share.
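For reference, the registry change from the first bullet as a .reg fragment (a sketch – the alias name should match whatever the hard-coded PeopleCode UNC path expects):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\LanmanServer\Parameters]
"OptionalNames"="sclapps532"
```

The Server (LanmanServer) service needs a restart before the alias is honoured.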
I came across this implementation of a hashtable in PeopleCode that uses the java API:
Implementing a HashTable in PeopleCode
But it is worth noting that there is another, PeopleCode-only implementation already delivered in PeopleTools:
It is trivial to add support for INCLUDE() and WHERE() clauses on SQL Server indexes in PeopleSoft – just change the model DDL for index creation to carry the two optional clauses, defaulted to blank, and then override them on specific indexes as needed:
Obviously, at the individual index level you will need the full syntax including the keywords, e.g.
INCLUDE (col1,col2,col3 …)
Be aware that the criteria you can use in a filtered index WHERE clause are limited – refer to the Microsoft documentation for more details.
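As an illustration, the amended create-index model DDL might look something like this – a sketch only: the **INCLUDE** and **WHERE** parameter names are my choice, and the base CREATE INDEX model shown should be checked against the delivered SQL Server DDL model before editing:

```
CREATE [UNIQUE] **CLUSTER** INDEX [IDXNAME] ON [TBNAME] ([IDXCOLLIST]) **INCLUDE** **WHERE**;
```

Both parameters default to blank in the DDL model defaults; on an index that needs them, override with the full clause, e.g. INCLUDE (col1,col2) and WHERE (col1 IS NOT NULL).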