Lawson Query node record limit?

 15 Replies
jocelyn
Basic Member
Posts: 9

I have a nightly process that extracts customer records from Lawson.  Very simple: a direct read with no conditions, since we need a full dump of basic Customer data each night.  The ARCUSTOMER and CUSTOMERDESC tables are joined, returning only 5 columns.  Using SQL query, it takes nearly an hour to extract 32,000 records.  I thought I'd try the Lawson Query node to see if that could be faster, but it bombs with a maximum-records-exceeded error.  So, my questions:  Would Lawson Query be faster than SQL Query?  Is the maximum records limit a system setting?  I tried overriding it by setting Maximum records to 50,000 in query builder, to no avail.  Just curious whether this can be changed, whether it will gain me speed in run time, or whether I should just give up and stick with SQL Query.

I run the job through Landmark.  When it aborts it says to check the IOS logs for more information.  Where would IOS logs be?

Thanks!

Todd Mitchell
Veteran Member
Posts: 87

I do not know about the Query node vs. straight SQL, but I do know that we have far more customers than that, and I have written queries against those 2 tables without any performance issues.  My FROM clause is this:

  from lawson.arcustomer arc
 inner join lawson.arcomp aco
    on (aco.company = arc.company)
 inner join lawson.custdesc cus
    on (cus.cust_group = aco.cust_group
    and cus.customer   = arc.customer)


Woozy
Veteran Member
Posts: 709
Hi jocelyn - in my experience, SQL performance should be far better than a Lawson query. I'm guessing there is a problem with your join or data selection. You need to make sure you are using the right key fields, which is what Todd is suggesting.

If you'd like to post your query, it might help to see where the issue is.

The IOS log is typically stored in $LAWDIR/system.
Kelly Meade
J. R. Simplot Company
Boise, ID
jocelyn
Basic Member
Posts: 9

Thanks for the responses.  You make a good point about using keys.  Ideally I would use Company and Customer to link the 2 tables, but only 1 table has Company.  CUSTDESC does not have Company; its keys are CUSTGROUP and CUSTOMER.  ARCUSTOMER has Company as a key, but does not have CUSTGROUP.  Here's the SQL:

SELECT C.CUST_GROUP, A.CUSTOMER, C.NAME, A.COMPANY, A.NAT_FLAG
   FROM ARCUSTOMER AS A INNER JOIN CUSTDESC AS C
   ON A.CUSTOMER = C.CUSTOMER
   ORDER BY CUSTOMER, COMPANY

Woozy
Veteran Member
Posts: 709
I just checked dbdoc (lawson database documentation utility) and found the relationship defined between ARCUSTOMER and CUSTDESC:

Customerdesc CUSTDESC Required
ACO-CUST-GROUP -> CUD-CUST-GROUP
ACM-CUSTOMER -> CUD-CUSTOMER

CUST-GROUP is the first primary key on CUSTDESC, so you might try adding that to your join and see if that helps.

Good Luck! Kelly
jocelyn
Basic Member
Posts: 9

Thanks.  I think that's pretty much what I've done - could be wrong, there's always room for error - but it doesn't appear to make a difference yet.  If I run it in SQL studio it takes less than a second to return the result set.  Run the process through Landmark and it's a different story.  Here's the updated SQL I'm using:

SELECT C.CUST_GROUP, A.CUSTOMER, C.NAME, A.COMPANY,A.NAT_FLAG
  FROM ARCUSTOMER AS A INNER JOIN ARCOMP AS B ON A.COMPANY = B.COMPANY
  INNER JOIN CUSTDESC AS C ON C.CUST_GROUP = B.CUST_GROUP
 WHERE  C.CUSTOMER = A.CUSTOMER
ORDER BY CUSTOMER, COMPANY
 

Bob Canham
Veteran Member
Posts: 217
Are you writing to a file on each iteration? I've read that this is very inefficient and can cause the process to take longer. Maybe accumulate the file contents in a message builder and then write all at once?
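Bob's point about per-iteration writes can be sketched outside of IPA. This hypothetical Python comparison (file names and data are invented) shows why accumulating everything and writing once is cheaper than touching the output file on every record; in IPA the analogue is building up a Message Builder and writing its contents with a single file operation at the end:

```python
# Sketch: accumulate rows in memory, then write once, instead of
# reopening/appending the output file on every iteration.
# Hypothetical data, purely for illustration.

rows = [("CUSTGRP1", f"CUST{i:05d}", f"Name {i}") for i in range(1000)]

def write_per_iteration(path):
    """Slow pattern: one open/write/close per record."""
    for row in rows:
        with open(path, "a") as f:          # reopened for every record
            f.write("|".join(row) + "\n")

def write_once(path):
    """Faster pattern: build the full payload, then write it in one call."""
    payload = "\n".join("|".join(row) for row in rows) + "\n"
    with open(path, "w") as f:
        f.write(payload)
```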

One other thing I've heard people do is write a SQL stored procedure that will perform the query and write to the file and just use IPA to call it.
Woozy
Veteran Member
Posts: 709
Bob has some good suggestions.

One other thought - you are joining ARCOMP and CUSTDESC instead of ARCUSTOMER to CUSTDESC. There isn't a relationship between ARCOMP and CUSTDESC but there is between ARCUSTOMER and CUSTDESC.

So, you might try changing your query slightly to:

SELECT
C.CUST_GROUP
,A.CUSTOMER
,C.NAME
,A.COMPANY
,A.NAT_FLAG
FROM
ARCUSTOMER AS A
INNER JOIN ARCOMP AS B
ON A.COMPANY = B.COMPANY
INNER JOIN CUSTDESC AS C
ON C.CUST_GROUP = A.CUST_GROUP
AND C.CUSTOMER = A.CUSTOMER
ORDER BY A.COMPANY, A.CUSTOMER

Also, your "ORDER BY" clause will completely kill your indexing, because there isn't a key in that order. If you must order it that way, then use your current query (minus the ORDER BY) in another SELECT * FROM...ORDER BY CUSTOMER, COMPANY.
John Henley
Posts: 3351
Use this:
INNER JOIN CUSTDESC AS C
ON C.CUST_GROUP = B.CUST_GROUP
AND C.CUSTOMER = A.CUSTOMER

not this:
INNER JOIN CUSTDESC AS C
ON C.CUST_GROUP = A.CUST_GROUP
AND C.CUSTOMER = A.CUSTOMER

...You are joining from ARCUSTOMER to ARCOMP to get the company's customer group, and using that as part of the join to CUSTDESC.
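That join path can be demonstrated with a toy in-memory sqlite database. The table shapes and data below are invented; only the join logic mirrors the thread (ARCOMP supplies the company's customer group, which then keys into CUSTDESC together with the customer number):

```python
import sqlite3

# Toy illustration of the ARCUSTOMER -> ARCOMP -> CUSTDESC join path.
# Columns and rows are made up for the example.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ARCUSTOMER (COMPANY INT, CUSTOMER TEXT, NAT_FLAG TEXT);
CREATE TABLE ARCOMP     (COMPANY INT, CUST_GROUP TEXT);
CREATE TABLE CUSTDESC   (CUST_GROUP TEXT, CUSTOMER TEXT, NAME TEXT);

INSERT INTO ARCUSTOMER VALUES (100, 'C001', 'N'), (100, 'C002', 'N');
INSERT INTO ARCOMP     VALUES (100, 'GRP1');
INSERT INTO CUSTDESC   VALUES ('GRP1', 'C001', 'Acme'),
                              ('GRP1', 'C002', 'Globex');
""")

rows = con.execute("""
SELECT C.CUST_GROUP, A.CUSTOMER, C.NAME, A.COMPANY, A.NAT_FLAG
  FROM ARCUSTOMER AS A
 INNER JOIN ARCOMP   AS B ON B.COMPANY    = A.COMPANY
 INNER JOIN CUSTDESC AS C ON C.CUST_GROUP = B.CUST_GROUP
                         AND C.CUSTOMER   = A.CUSTOMER
 ORDER BY A.COMPANY, A.CUSTOMER
""").fetchall()
```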
Thanks for using the LawsonGuru.com forums!
John
jocelyn
Basic Member
Posts: 9

Hi - I am writing the output to message builder first.   Once it's completed, the message builder contents are written to the output file.  

Unless I'm going blind (which could be the case late on a Friday afternoon), I think my second SQL query does exactly what the most recent corrections suggest.  So far, it's been running 45 minutes through Landmark.

Bob Canham
Veteran Member
Posts: 217
Do you have activity logging turned on? It will be bad for performance in production, but it might give you an idea of the slowdown while testing. It would at least show how long the SQL query itself takes versus how much time is spent iterating through the results.
Woozy
Veteran Member
Posts: 709
Oops - you're right John. My mistake.

I bet Bob is exactly right. Logging will KILL performance. We turn logging off completely for "batch" type flows, except when we are actually developing and need to look at something specific.
jocelyn
Basic Member
Posts: 9
Logging is turned off.
jocelyn
Basic Member
Posts: 9

A clarification: the logging level on the process is set to None in Landmark.  I don't know if there is other logging going on behind the scenes - I am not a Lawson or Landmark administrator, and I don't have the access or knowledge to dig around in the bowels of the system.  I do think there is something very wrong somewhere in Lawson and/or Landmark, because I changed the query to a straight read on only one of the tables, ARCUSTOMER.  It has 33K records, and since I'm pulling only 4 fields, this should be a snap.  I uploaded the process and launched its trigger.  It ran 48 minutes.  I think the query isn't the problem; I suspect our Lawson and/or Landmark installation is faulty: configuration and/or space issues.

Which brings me back to the original question in my post: I switched from a SQL query to a Lawson query, and the process always failed with the error that the maximum number of records was exceeded.  I cannot find any documentation on the Lawson query node that mentions a record limit.  So, I'm wondering if there is one, or if one could have been configured during the software installation.  If there is no record limit on the Lawson query node, could this error indicate a not-enough-space issue?

John Henley
Posts: 3351
Jocelyn,

You can try adding &MAX=99999 or &MAX=ALL as part of the query string, but regardless of what you put for the &MAX= value, the maximum number of records will be limited by the IOS parameter setting com.lawson.ios.dig.db.maxRecsQuery; even if you set a larger number, there is still a hard-coded record limit (I think it's 10,000) in the IOS Data servlet.

Using the SQL query bypasses that limitation. However, both the Lawson query and the SQL query return the results as JavaScript variables, so if you have a large number of rows, the amount of memory consumed by those variables grows considerably and the query returns very slowly as a result; e.g. 33K rows * 4 data columns = a lot of variables and memory.

One thing you might want to try is to shard the data retrieval: loop and retrieve based on the last digit of the customer number, i.e.

for x = 0 to 9: select xxx..... WHERE RIGHT(CUSTOMER,1) = x

That's what I do for some queries that I know will return a lot of data.
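That last-digit sharding idea can be sketched like this (hypothetical table and data, using sqlite, where the thread's RIGHT(CUSTOMER,1) maps to SUBSTR(CUSTOMER, -1, 1); other SQL dialects spell the function differently):

```python
import sqlite3

# Sketch: retrieve a large table in ten slices keyed on the last
# character of CUSTOMER, accumulating each slice into one list.
# Table and data are invented for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ARCUSTOMER (COMPANY INT, CUSTOMER TEXT)")
con.executemany(
    "INSERT INTO ARCUSTOMER VALUES (?, ?)",
    [(100, f"C{i:04d}") for i in range(250)],
)

all_rows = []
for x in range(10):
    # One smaller query per shard; each result set is appended to the
    # running output (in IPA: written to the MsgBuilder/file in turn).
    shard = con.execute(
        "SELECT COMPANY, CUSTOMER FROM ARCUSTOMER "
        "WHERE SUBSTR(CUSTOMER, -1, 1) = ?",
        (str(x),),
    ).fetchall()
    all_rows.extend(shard)
```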

jocelyn
Basic Member
Posts: 9

Thanks for the info on the record limit.  I did try the MAX argument with the Lawson query, but it still failed with the maximum-record-limit-exceeded error.  So the SQL query will be the way to go.

I'm not sure I quite understand the "shard" technique.  I understand using it to retrieve all customers ending in '1', then ending in '2', and so on.  But what are you doing with each of those result sets?  Writing them to MsgBuilder and then appending them to a file?

For a very simple, straightforward extract of 5 fields for 40K records - that amount of work should not be required.