This topic contains 12 replies, has 7 voices, and was last updated by TheUsualSuspect 7 years, 3 months ago.

  • Author
    Posts
  • #18401

    TheUsualSuspect

    I wrote a map/reduce script to mass delete records using either a saved search or a CSV file as input, since this comes up a lot. The script works great in some accounts that have 20-30 queues and deletes 10k journal entries in a flash. However, I am running into usage issues when I need to delete large data sets, and I would like to know if someone can share insight as to why.

    To my knowledge, map/reduce governance works as specified below:

    GetInputData -> 10,000 usage

    Map -> 1,000 usage per invocation

    Reduce -> 5,000 usage per invocation

    Summarize -> 10,000 usage

    I am getting a “Script Execution Usage Limit Exceeded” error despite only deleting 50 records in a reduce invocation. In this case I am deleting only contacts, which is something like 10-15 units per delete.
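    For reference, a minimal sketch of the kind of map/reduce mass-delete script being described might look like the following. This is not the poster's actual code; the saved-search script parameter id is hypothetical, and the governance numbers in the comments are the ones quoted above.

```javascript
/**
 * @NApiVersion 2.x
 * @NScriptType MapReduceScript
 */
// Hypothetical sketch: a saved search feeds getInputData (10,000-unit limit),
// map groups ids by record type (1,000 units per invocation), and reduce does
// the deletes (5,000 units per invocation).
define(['N/search', 'N/record', 'N/runtime'], function (search, record, runtime) {

    function getInputData() {
        var searchId = runtime.getCurrentScript().getParameter({
            name: 'custscript_mass_delete_search' // hypothetical parameter id
        });
        return search.load({ id: searchId }); // results are streamed to map
    }

    function map(context) {
        var result = JSON.parse(context.value); // one search result per invocation
        context.write({ key: result.recordType, value: result.id });
    }

    function reduce(context) {
        // Each reduce invocation gets 5,000 units, so at roughly 10-15 units per
        // contact delete, a batch of 50 ids should be nowhere near the limit.
        context.values.forEach(function (id) {
            record.delete({ type: context.key, id: id });
        });
    }

    function summarize(summary) {
        log.audit('Total usage consumed', summary.usage);
        log.audit('Concurrency', summary.concurrency);
    }

    return { getInputData: getInputData, map: map, reduce: reduce, summarize: summarize };
});
```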

  • #18402

    teddycaguioa

    Try logging the remaining execution units to see where the usage is going.
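    A quick way to do that (sketch, assuming N/runtime and N/record are already loaded as runtime and record inside the map or reduce entry point):

```javascript
// Log how many units a single delete actually costs.
var before = runtime.getCurrentScript().getRemainingUsage();
record.delete({ type: context.key, id: id });
var after = runtime.getCurrentScript().getRemainingUsage();
log.debug('remaining usage', after);
log.debug('delete cost', before - after);
```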

  • #18403

    Olivier Gagnon NC

    Do the records being deleted trigger their own scripting, like say afterSubmit scripts?

  • #18404

    TheUsualSuspect

    I am unfamiliar with this particular account, so I'm not entirely sure what is being triggered, but the Scripted Records page has no mention of sales orders or any other transactions having user event scripts attached.

    Does the usage from user event scripts and other triggered scripting bubble up into the calling script from a record.delete call?

  • #18405

    TheUsualSuspect

    Originally posted by teddycaguioa

    Try logging the remaining execution units to see where the usage is going.

    I ran my script to clear out ~100K custom records. It did not finish, and it was nowhere near the usage limit per queue.

    There appears to be a hard cap, but my usage of 88,000 is way past that. Can I get a clear indication of what the rules are for map/reduce?

  • #18406

    borncorp

    I don’t see why you would be running out of points, but you could try moving the main logic to a backend Suitelet and have the Map Reduce call it passing the recordid to delete. Just a thought.
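    A rough sketch of that pattern, assuming a backend Suitelet deployed so it is reachable without login and hypothetical script/deployment ids (only the map stage of the map/reduce is shown):

```javascript
// Fragment of a map/reduce that hands each delete off to a backend Suitelet.
// 'customscript_delete_sl' / 'customdeploy_delete_sl' are hypothetical ids.
define(['N/url', 'N/https'], function (url, https) {

    function map(context) {
        var result = JSON.parse(context.value);
        var suiteletUrl = url.resolveScript({
            scriptId: 'customscript_delete_sl',
            deploymentId: 'customdeploy_delete_sl',
            returnExternalUrl: true // the deployment must be reachable without login
        });
        var response = https.post({
            url: suiteletUrl,
            body: JSON.stringify({ type: result.recordType, id: result.id })
        });
        if (response.code !== 200) {
            // remember failures so a later stage (or a second run) can retry them
            context.write({ key: 'failed', value: result.id });
        }
    }

    return { map: map };
});
```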

  • #18407

    TheUsualSuspect

    Originally posted by borncorp

    I don’t see why you would be running out of points, but you could try moving the main logic to a backend Suitelet and have the Map Reduce call it passing the recordid to delete. Just a thought.

    Aren’t there concurrency limits to suitelets? If I have 30 queues constantly querying a suitelet I think I’ll get an error. That being said if there are true hard caps on map/reduces and this design pattern is possible, it might be the way to ensure the script has enough usage.

    I honestly don't care how fast map/reduce is if it's inconsistent; the main thing is that it will finish its run. If map/reduce remains this way, I'll probably just move back to 1.0 scheduled scripts.

  • #18408

    borncorp

    Originally posted by TheUsualSuspect

    Aren’t there concurrency limits to suitelets? If I have 30 queues constantly querying a suitelet I think I’ll get an error. That being said if there are true hard caps on map/reduces and this design pattern is possible, it might be the way to ensure the script has enough usage.

    I honestly don't care how fast map/reduce is if it's inconsistent; the main thing is that it will finish its run. If map/reduce remains this way, I'll probably just move back to 1.0 scheduled scripts.

    Yeah, I recently ran some load testing of 50+ concurrent Suitelets vs. map/reduce. Check it out at https://ursuscode.com/netsuite-tips/…rent-suitelet/ . I doubt you will run into those issues, but it's always good to take them into consideration. What I have experienced is that Suitelets sometimes lose the connection, so your requests might get a timeout response; you have to code it so there's a fallback mechanism to recover the ones that errored out.
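    A sketch of that kind of fallback, assuming the same N/https module and Suitelet URL as in the fragment above (the helper name is made up):

```javascript
// Retry the Suitelet call a few times; the caller records anything that still
// fails so it can be cleaned up by a later run.
function postWithRetry(https, suiteletUrl, payload, maxAttempts) {
    for (var attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            var response = https.post({
                url: suiteletUrl,
                body: JSON.stringify(payload)
            });
            if (response.code === 200) {
                return true;
            }
            log.debug('Suitelet returned ' + response.code, 'attempt ' + attempt);
        } catch (e) {
            log.debug('Suitelet call failed', 'attempt ' + attempt + ': ' + e.message);
        }
    }
    return false;
}
```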

  • #18409

    david.smith

    Why are you putting your delete logic in a loop inside a function that's called from the reduce stage? Where is your array coming from? Could you delete each record as it comes through the map stage instead?

  • #18410

    bmesa

    Originally posted by david.smith

    Why are you putting your delete logic in a loop inside a function that's called from the reduce stage? Where is your array coming from? Could you delete each record as it comes through the map stage instead?

    The array seems to be coming from the reduceContext object. Also, I agree with your point about deleting within the mapping stage. I did that when I had to delete millions of records. Saved search to gather the ids, then in the Map phase delete the record (the mapContext contains the record type and internal ids).

    I still hit the usage limit, but it was a legit error (considering the massive amounts of data).

    EDIT: As for the original question: I am unsure why you are running out. But for M/R, it should just keep restarting itself if it runs out of usage until the job is done.
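    A minimal sketch of the delete-in-map approach described here (the saved search id is hypothetical):

```javascript
/**
 * @NApiVersion 2.x
 * @NScriptType MapReduceScript
 */
// getInputData returns the saved search; each map invocation deletes exactly
// one record using the record type and internal id carried in the search result.
define(['N/search', 'N/record'], function (search, record) {

    function getInputData() {
        return search.load({ id: 'customsearch_records_to_delete' }); // hypothetical
    }

    function map(context) {
        var result = JSON.parse(context.value); // { recordType: ..., id: ..., values: {...} }
        record.delete({ type: result.recordType, id: result.id });
    }

    function summarize(summary) {
        summary.mapSummary.errors.iterator().each(function (key, error) {
            log.error('Delete failed for id ' + key, error);
            return true; // keep iterating
        });
    }

    return { getInputData: getInputData, map: map, summarize: summarize };
});
```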

  • #18411

    chanarbon

    Originally posted by borncorp

    I don’t see why you would be running out of points, but you could try moving the main logic to a backend Suitelet and have the Map Reduce call it passing the recordid to delete. Just a thought.

    I would rather not take that approach. On a small number of records the running time difference is negligible, but if you make 100+ calls during a single function invocation you will see the difference. I do agree with david.smith's suggestion of placing the action in map, since it performs the action per result / per id. Another approach is to improve the logic in map so that the set of results going into reduce is smaller and you won't hit the usage-limit-exceeded error.

  • #18412

    TheUsualSuspect

    Hi All,

    Not sure what happened, but I redesigned the script and it works now. It constructs the search in the getInputData stage and doles out slices of it to the map stage to search, then deletes the records in the reduce stage. This approach no longer runs into the usage issue, even with larger inputs.

    I think the major difference is that the getInputData stage now takes drastically less time; maybe that was the issue with usage?
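    For anyone landing here later, a hedged reconstruction of that redesign might look like the following. The search id and page size are assumptions, and keying the reduce stage by internal id keeps each reduce invocation tiny.

```javascript
/**
 * @NApiVersion 2.x
 * @NScriptType MapReduceScript
 */
// getInputData only counts the search pages and emits one page index per map
// invocation; each map runs its own page of the search and writes the ids out;
// reduce then deletes one record per invocation.
define(['N/search', 'N/record'], function (search, record) {

    var SEARCH_ID = 'customsearch_records_to_delete'; // hypothetical
    var PAGE_SIZE = 1000;

    function getInputData() {
        var paged = search.load({ id: SEARCH_ID }).runPaged({ pageSize: PAGE_SIZE });
        var pages = [];
        for (var i = 0; i < paged.pageRanges.length; i++) {
            pages.push(i);
        }
        return pages; // one page index per map invocation
    }

    function map(context) {
        var pageIndex = parseInt(context.value, 10);
        var paged = search.load({ id: SEARCH_ID }).runPaged({ pageSize: PAGE_SIZE });
        paged.fetch({ index: pageIndex }).data.forEach(function (result) {
            // key by internal id so each reduce invocation handles a single record
            context.write({ key: result.id, value: result.recordType });
        });
    }

    function reduce(context) {
        record.delete({ type: context.values[0], id: context.key });
    }

    return { getInputData: getInputData, map: map, reduce: reduce };
});
```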

  • #18413

    TheUsualSuspect

    Originally posted by bmesa

    The array seems to be coming from the reduceContext object. Also, I agree with your point about deleting within the mapping stage. I did that when I had to delete millions of records. Saved search to gather the ids, then in the Map phase delete the record (the mapContext contains the record type and internal ids).

    I still hit the usage limit, but it was a legit error (considering the massive amounts of data).

    EDIT: As for the original question: I am unsure why you are running out. But for M/R, it should just keep restarting itself if it runs out of usage until the job is done.

    I am now using a design very similar to what you and chanarbon recommended. Thanks!
