Last Post 01/30/2018 6:28 PM by  Kwane McNeal
Huge error-rpt files created when report errors
 8 Replies
Kert490, Programmer Analyst
01/15/2018 12:40 PM
    We have had several instances where our Lawson drive space was suddenly eaten up with no warning, going from under 50% to over 95% and causing failures. We found that the space was being taken up by error files in a user's print directory. Two files are created: error-rpt (around 40 GB) and error_rpt.dtl (around 5 GB). These are much larger than the normal files created when a report errors out.

    We can go through and delete the files and restore the space, but we would like to prevent the files from being created to avoid issues with having the drive fill up while people are using the system.

    Does anyone know why these huge error files are being created?

    Is there any way to prevent their creation, since they have no particular use and are harmful to the system?
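    Until the root cause is found, one stopgap is to watch for oversized error files before they fill the drive. A minimal sketch (the print-directory path, file-name pattern, and 1 GB threshold below are assumptions, not Lawson specifics; adjust to your LAWDIR layout):

    ```python
    from pathlib import Path

    def find_large_error_files(print_root, threshold_bytes=1 * 1024**3):
        """Scan a print directory tree for oversized error report files.

        The name pattern "error*rpt*" is an assumption meant to catch both
        error-rpt and error_rpt.dtl; tune it to your site's naming.
        """
        hits = []
        for path in Path(print_root).rglob("error*rpt*"):
            if path.is_file() and path.stat().st_size >= threshold_bytes:
                hits.append((path, path.stat().st_size))
        return hits

    # Example: flag anything over 1 GB so it can be investigated early.
    # The path is illustrative; substitute your actual print directory.
    for path, size in find_large_error_files(r"E:\INFOR10X\law\print"):
        print(f"{path}: {size / 1024**3:.1f} GB")
    ```

    Run on a schedule (Task Scheduler, cron), this at least turns a silent disk-fill into an alert.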

    Tags: Reports
    Kwane McNeal
    01/15/2018 12:45 PM
    Make sure none of the batch jobs and invoked programs are compiled in trace or debug mode.

    Look in LAWDIR/product-line/obj for .idy and .int files, as those are usually the programs that have been compiled in trace or debug mode.
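    A quick way to sweep an obj directory for those leftover debug artifacts (the path in the example is illustrative; point it at your own LAWDIR/product-line/obj):

    ```python
    from pathlib import Path

    def find_debug_artifacts(obj_dir):
        """Return .idy and .int files under an obj directory.

        Their presence usually indicates programs compiled in trace or
        debug mode; .gnt files are normal compiled output.
        """
        obj = Path(obj_dir)
        return sorted(p for p in obj.rglob("*") if p.suffix in (".idy", ".int"))

    # Example path is an assumption; substitute your product line's obj dir.
    for artifact in find_debug_artifacts(r"E:\INFOR10X\law\GHSE\obj"):
        print(artifact)
    ```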

    Other than that, we would need to know more about the specific job, and perhaps see a snippet of the error-rpt file, to know what's happening.
    (REMEMBER not to post a snippet containing sensitive data.)
    Kert490, Programmer Analyst
    01/16/2018 2:01 PM
    We don't have any examples of the giant error-rpt files, since we delete them when it happens, but I will post one if it happens again before this is resolved.

    I checked in our directory E:\INFOR10X\law\GHSE\obj for the .idy and .int files, but didn't find any. It is mostly just .gnt files which look to be compiled reports.

    Is there another place we can see if the reports were compiled in debug/trace mode?
    Kwane McNeal
    01/16/2018 2:29 PM
    If you have no .idy or .int, then nothing is compiled in debug mode. Without knowing which programs are causing the issue or any snippets of the reports themselves, there’s nothing more I really could tell you.
    John Henley
    01/16/2018 5:51 PM
    This sounds like it's an error report generated by a particular batch job/report, not a job log file.
    Do you only see it on particular jobs/tokens, and if so, which one(s)?
    Thanks for using the LawsonGuru.com forums!
    John
    Kert490, Programmer Analyst
    01/18/2018 2:18 PM
    The last two times the report created huge error files, it was running AP270, from two different users.
    John Henley
    01/18/2018 2:39 PM
    I would assume 1) you have a ton of AP data, and 2) the users are running AP270 with pretty broad parameters. If that is the case, that's just the way it is, and the users would need to be more cautious with the parameters.
    Thanks for using the LawsonGuru.com forums!
    John
    Kert490, Programmer Analyst
    01/30/2018 2:55 PM
    I do believe that the parameters have something to do with it. The users that have the issue are using the "Company Group" parameter, which seems very vague; no one here seems to know what a company group is. Does anyone know where a company group is defined?

    The really odd thing has to do with the REC_STATUS field in the APPAYMENT table. I did a trace while the AP270 was running forever (literally over 7 days) and found that it was querying the APPAYMENT table based on Company, at a rate of at least several times a second. It also kept incrementing the value it queried on REC_STATUS. When I looked it was at 630,562,053 and climbing; I'm not sure how high it went. This is really bizarre since REC_STATUS is a tinyint, which should only go to 255 and only contains 3 values.

    So why would the AP270 report think it needs to use values of around 1 billion in a query against a tinyint? I'd love to figure out where it gets the values it queries from, but I don't have that understanding at this point. Any clue on how this works?
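    One way to sanity-check what is actually stored is to group on REC_STATUS directly in the database. A sketch using an in-memory SQLite copy as a stand-in (the table and column names follow the post; against the real database you would run the same SQL through your site's DB-API driver, and the sample rows here are invented):

    ```python
    import sqlite3

    # Build a tiny stand-in for APPAYMENT; the real table lives in your
    # Lawson database and has many more columns.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE APPAYMENT (COMPANY INTEGER, REC_STATUS INTEGER)")
    conn.executemany(
        "INSERT INTO APPAYMENT VALUES (?, ?)",
        [(100, 0), (100, 1), (100, 9), (200, 0)],
    )

    # A tinyint column should hold only a handful of small values; seeing
    # predicates near 1 billion points at the program's loop, not the data.
    rows = conn.execute(
        "SELECT REC_STATUS, COUNT(*) FROM APPAYMENT"
        " GROUP BY REC_STATUS ORDER BY REC_STATUS"
    ).fetchall()
    print(rows)  # [(0, 2), (1, 1), (9, 1)]
    ```

    If the stored values really are just a few small integers, the ever-growing value in the trace is being generated by the program, which supports looking at its parameters and cache behavior rather than the table.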

    Kert
    Kwane McNeal
    01/30/2018 6:28 PM
    So, I reread this thread, and I wanted to make a few observations, ask a few questions, and then attempt to answer your questions...

    1) In your original post, you state that the error report has no use. The business would greatly disagree.
    2) Based on the points you focus on, I get the impression your background may be more that of a system admin or DBA. If that observation is correct, the next parts will be of value to you.

    Based on that, my question is:
    1) If you are more of a DBA / SQL developer, are you familiar with COBOL, and more importantly, Lawson's way of doing COBOL?

    Now, on to some potentially useful info.
    1) Lawson S3 typically doesn't store data in normalized form, for a variety of reasons, the most important being:
        a) staying RDBMS vendor agnostic
        b) simplifying access via COBOL (tables more closely match COBOL copybooks)
    Because of these, accessing the data will very rarely be as efficient as 4GL-style access on normalized data. That means you will see numerous redundant data lookups.
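    The redundant-lookup point can be illustrated with a generic memoization sketch (this is not Lawson code; it only shows why a lookup cache collapses repeated key fetches into one physical query):

    ```python
    from functools import lru_cache

    db_calls = 0  # counts simulated physical database lookups

    @lru_cache(maxsize=1024)
    def lookup_company(company_id):
        """Simulated database lookup; the cache absorbs repeat requests."""
        global db_calls
        db_calls += 1
        return {"id": company_id, "name": f"Company {company_id}"}

    # A denormalized batch run hits the same company key over and over:
    for _ in range(10_000):
        lookup_company(100)

    print(db_calls)  # prints 1: one physical lookup despite 10,000 requests
    ```

    This is the effect the program's cache configuration settings are meant to achieve for its hot tables.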

    Now with that said, here are a few things you can do.
    1) See if you can tweak the caching values in the program's cache configuration file (the AP270.cfg file). There is a guide on this on the Xtreme support site.
    This will help reduce the redundant lookups, but will NOT likely reduce the error report file sizes. As John put it, that will be up to the business users running the jobs in a more efficient manner.
    2) Rewrite the report in a 3rd-party reporting tool, to eliminate the files and speed things up. Understand, this CANNOT be done for any report that contains update logic of ANY KIND (barring logic for the CKPOINT table).

    A few other things:
    Whenever you see a file that has a corresponding .dtl file, that file is being INTENTIONALLY produced by the program, and will likely contain data relevant to the business. I really can’t understand why the files are getting so large. So I’d have the business really go through that and see if the output makes any sense. To John’s point, if you have a massive amount of data, and you run the AP270 wide open, it’s possible.

    In that case, you may want to discuss with the business purging data they no longer need.