User:PoC


Latest revision as of 01:48, 27 August 2023

I am the principal maintainer of this Wiki.

Unofficial Benchmarks

To get a rough idea of CPU speed, I have a physical table of German ZIP codes, including long-distance telephone prefixes, filled with nearly 20,000 records in six fields, summing up to 2.25 MiB of data. The original source on the internet contained a lot of duplicate records. Birgitta Hauser came up with a quick and dirty way to eliminate these dupes.[1]

Since this SQL proved to run really slowly, I decided to use it as a benchmark. Of course, it's not 100 % fair, because of different OS releases and accompanying improvements in SQL handling. But it gives a rough idea.

The SQL was run from a source PF member as a standard batch job (default parameters) on idling machines (no additional workload).

Machine    Proc. Feat.  OS    Runtime (ca.)    Comments
9401-150   2270         V4R5  About 118 hrs.
9406-S20   2161         V4R4  About 13.5 hrs.
9406-800   7408         V5R2  About 2 hrs.
8203-E4A   5587         V7R2  1-2 minutes      LPAR with two CPU cores assigned.
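For a sense of scale, the runtimes above translate into rough speedup factors relative to the slowest machine. A quick calculation (taking "1-2 minutes" as 1.5 minutes, which is an assumption):

```python
# Runtimes from the table above, converted to minutes.
# "1-2 minutes" for the 8203-E4A is taken as 1.5 (an assumption).
runtimes_min = {
    "9401-150": 118 * 60,   # about 118 hrs
    "9406-S20": 13.5 * 60,  # about 13.5 hrs
    "9406-800": 2 * 60,     # about 2 hrs
    "8203-E4A": 1.5,        # 1-2 minutes
}

slowest = runtimes_min["9401-150"]
for machine, minutes in runtimes_min.items():
    print(f"{machine}: roughly {slowest / minutes:,.0f}x faster than the 9401-150")
```

So the two-core POWER6-era LPAR finishes the same job a few thousand times faster than the 9401-150.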

Used files

  • DDS for the database as follows:
A                                      UNIQUE
A          R PLZTBL
A            ORT           32A
A            LAND          17A
A            VORWAHL        6A         ALWNULL
A            PLZ            5A
A            ID             5S 0
A            ZUSATZ        30A         ALWNULL
A          K ID
  • SQL for RUNSQLSTM as follows:
DELETE FROM PLZPF A
 WHERE A.ID = (
  SELECT MAX(ID) FROM PLZPF
   GROUP BY ORT, LAND, VORWAHL, PLZ, ZUSATZ
   HAVING COUNT(*) > 1 AND MAX(ID) = A.ID)
  • Submit Command for RUNSQLSTM as follows:
SBMJOB JOB(DLTPLZDUPS) CMD(RUNSQLSTM SRCFILE(PLZ/SOURCES) SRCMBR(DLTDUPS) COMMIT(*NONE) NAMING(*SQL) DFTRDBCOL(PLZ))
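Note that the DELETE removes only the highest-ID row of each duplicate group per execution, so groups with more than two copies need repeated runs. A minimal sketch of the same logic using SQLite from Python — not DB2 for i, so this only illustrates the technique; the table layout is taken from the DDS above with simplified types, and the sample rows are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Table layout taken from the DDS above (simplified types).
conn.execute("""CREATE TABLE PLZPF (
    ORT TEXT, LAND TEXT, VORWAHL TEXT, PLZ TEXT,
    ID INTEGER PRIMARY KEY, ZUSATZ TEXT)""")
conn.executemany(
    "INSERT INTO PLZPF VALUES (?, ?, ?, ?, ?, ?)",
    [   # three identical Berlin rows, one unique Hamburg row (made-up sample data)
        ("Berlin",  "Berlin",  "030", "10115", 1, None),
        ("Berlin",  "Berlin",  "030", "10115", 2, None),
        ("Berlin",  "Berlin",  "030", "10115", 3, None),
        ("Hamburg", "Hamburg", "040", "20095", 4, None),
    ])

# Repeat the delete until a pass removes nothing: each pass deletes
# the max-ID row of a duplicate group, keeping the lowest-ID copy.
while True:
    cur = conn.execute("""
        DELETE FROM PLZPF AS A
         WHERE A.ID = (
          SELECT MAX(ID) FROM PLZPF
           GROUP BY ORT, LAND, VORWAHL, PLZ, ZUSATZ
           HAVING COUNT(*) > 1 AND MAX(ID) = A.ID)""")
    if cur.rowcount == 0:
        break

print(conn.execute("SELECT COUNT(*) FROM PLZPF").fetchone()[0])  # 2 rows remain
```

The correlated subquery is what makes this "a bit slow": for every candidate row, the whole table is grouped again.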

I can provide a save file of the whole library on request.

Switching back to the traditional CQE SQL engine involves creating a copy of QSYS/QAQQINI (e.g. with CRTDUPOBJ), updating the SQE_NATIVE_ACCESS option in this copy through SQL, and pointing the job at the copy with CHGQRYA QRYOPTLIB.

Weblinks

  • Creating a QAQQINI File: https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_72/rzajq/createqaqqini.htm
  • Change Query Attributes for using the desired QAQQINI file: https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_72/rzajq/specifyqaqqini.htm
  • QAQQINI File Format: https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_72/rzajq/rzajqqaqqiniformat.htm

Footnotes

  1. She admitted, it might be a bit slow.