Batch request compression

Dear Gateway Community,
I was wondering whether it is possible to have the response of a Gateway batch request (READ only) delivered with gzip compression.
Indeed, I've noticed that after I grouped my requests into a batch (just as the best-practice guides recommend), the payload went from 838 KB to 7.1 MB because the response was not using any content encoding.
I'm using the datajs library that SAP themselves use, and I tried adding the Accept-Encoding header without success:
var request = {
    headers : {
        "Accept-Encoding": "gzip, deflate"
    },
    requestUri : '/sap/opu/odata/sap/ZGW_SERVICE/flightCollection',
    method : "GET"
};
var batchRequests = [request];
OData.request({
    headers : {
        "Accept" : "application/json",
        "X-CSRF-Token" : token,
        "Accept-Encoding": "gzip, deflate"
    },
    requestUri : '/sap/opu/odata/sap/ZGW_SERVICE/$batch',
    method : "POST",
    data : {
        __batchRequests : batchRequests
    }
}, function () {
    console.log('success');
    debugger;
}, function () {
    console.log('failure');
}, OData.batchHandler);
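To see whether the server honours the header at all, a minimal check (a sketch, assuming datajs's usual success callback signature of (data, response), with the raw response object as second argument) is to log the Content-Encoding of the batch response:

OData.request({
    headers : {
        "Accept" : "application/json",
        "X-CSRF-Token" : token,
        "Accept-Encoding" : "gzip, deflate"
    },
    requestUri : '/sap/opu/odata/sap/ZGW_SERVICE/$batch',
    method : "POST",
    data : { __batchRequests : batchRequests }
}, function (data, response) {
    // If the gateway compressed the payload this should log "gzip".
    // Caveat: some browsers strip Content-Encoding after transparently
    // decompressing, so an absent header is not conclusive on its own.
    console.log(response.headers["Content-Encoding"]);
}, function (err) {
    console.log('failure', err);
}, OData.batchHandler);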
Maybe that has something to do with the fact that the response content type for batch requests is multipart/mixed.
Thanks in advance for any help on this matter,
Cheers,
Paul

Hello Ron,
Thanks for your answer.
Single GET requests (non-batch) are sent with the "Accept-Encoding" HTTP header set to "gzip", which tells the server that, if it supports it, the response payload should be compressed.
The entity collection has 20,000 entries. The JSON format reduces the payload, but not significantly enough to make it much faster.
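(Side note on the JSON point: in a $batch, the outer Accept header applies only to the multipart envelope, so the format has to be requested per inner request. A sketch, reusing the names from the first snippet, with the extra Accept header being the only change:)

var request = {
    headers : {
        // Ask for JSON for this part; the Accept header on the outer
        // $batch call does not propagate to the inner requests.
        "Accept" : "application/json",
        "Accept-Encoding" : "gzip, deflate"
    },
    requestUri : '/sap/opu/odata/sap/ZGW_SERVICE/flightCollection',
    method : "GET"
};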
Best regards,
Paul

Similar Messages

  • Incomplete Response for Batch Request

    Folks,
    We are using batch processing for some of our transactions (mainly write use cases) and are facing issues with the response of the $batch request when there is an error (a business error) in one of the change sets. The problem is that we don't get a response body for any request after the first one that fails (HTTP 400 Bad Request); the response headers are still there, however, and the transaction completes successfully for the remaining change sets. Below is a sample request and response that illustrates the issue we are facing.
    Request:
    --batch
    Content-Type: multipart/mixed; boundary=changeset
    --changeset
    Content-Type: application/http
    Content-Transfer-Encoding: binary
    POST POSTQTYCollection HTTP/1.1
    Content-Type: application/json
    Content-Length:672
    {<request body>}
    --changeset--
    --batch
    Content-Type: multipart/mixed; boundary=changeset
    --changeset
    Content-Type: application/http
    Content-Transfer-Encoding: binary
    POST POSTQTYCollection HTTP/1.1
    Content-Type: application/json
    Content-Length:671
    {<request body>}
    --changeset--
    --batch
    Content-Type: multipart/mixed; boundary=changeset
    --changeset
    Content-Type: application/http
    Content-Transfer-Encoding: binary
    POST POSTQTYCollection HTTP/1.1
    Content-Type: application/json
    Content-Length:673
    {<request body>}
    --changeset--
    --batch--
    Response:
    --ejjeeffe0
    Content-Type: multipart/mixed; boundary=ejjeeffe1
    Content-Length:      1362
    --ejjeeffe1
    Content-Type: application/http
    Content-Length: 1241
    content-transfer-encoding: binary
    HTTP/1.1 201 Created
    Content-Type: application/atom+xml;type=entry
    Content-Length: 1013
    location: xxx/POSTQTYCollection('124557.1074')
    dataserviceversion: 2.0
    <RESPONSE BODY WITH ENTITY>
    --ejjeeffe1--
    --ejjeeffe0
    Content-Type: application/http
    Content-Length: 1211
    content-transfer-encoding: binary
    HTTP/1.1 400 Bad Request
    Content-Type: application/xml
    Content-Length: 1006
    location: xxx/POSTQTYCollection('')
    dataserviceversion: 1.0
    <RESPONSE BODY WITH ERROR>
    --ejjeeffe0
    Content-Type: multipart/mixed; boundary=ejjeeffe1
    Content-Length:       312
    --ejjeeffe1
    Content-Type: application/http
    Content-Length: 192
    content-transfer-encoding: binary
    HTTP/1.1 201 Created
    Content-Type: text/html
    Content-Length: 0
    location: xxx/POSTQTYCollection('')
    dataserviceversion: 2.0
    <NO RESPONSE BODY>
    --ejjeeffe1--
    --ejjeeffe0--
    As you can see, the last change set has a valid response (HTTP 201), but the body of that response is not contained in the multipart response. The request is formatted correctly, since the entire transaction works as expected; i.e. the first change set gets committed, the second gets rolled back, and the final one gets committed. However, the response body for the requests after the failing one is not sent by Gateway.
    Does anyone know a solution for this? We rely on the response of the CREATE for further processing, and hence cannot rely on the HTTP code alone to know that the CREATE was successful.
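    For illustration, this is roughly how the client consumes the per-change-set results (a sketch assuming a datajs-style client and the __batchResponses/__changeResponses response shape datajs returns; the service URI and the batchRequests variable are placeholders, not our actual code):

    OData.request({
        requestUri : '/sap/opu/odata/sap/ZSERVICE/$batch',
        method : "POST",
        data : { __batchRequests : batchRequests }
    }, function (data) {
        data.__batchResponses.forEach(function (batchResponse) {
            (batchResponse.__changeResponses || []).forEach(function (r) {
                if (String(r.statusCode) === "201") {
                    // r.data should hold the created entity -- this is the
                    // body that is missing after the failing change set.
                    console.log("created:", r.data);
                } else {
                    console.log("failed:", r.statusCode, r.statusText);
                }
            });
        });
    }, function (err) {
        console.log("batch failed", err);
    }, OData.batchHandler);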
    Thanks in advance,
    Henry

    Hello Henry,
    We are firing the CREATE/POST operations in batch mode with exactly the same kind of payload you have shared here, and we correctly get the responses:
    The 1st operation was successful and we got content with a 201 Created status code.
    The 2nd operation failed and we got a 400 Bad Request with an error.
    The 3rd operation was successful and we got content with a 201 Created status code.
    I am unable to replicate your problem; everything works fine for me. Your problem is strange.
    Regards,
    Ashwin

  • 0IC_C03 Request Compression Status

    Hi Friends,
    We have loaded data into the Inventory InfoCube 0IC_C03. The Deltas are running.
    I would like to know how the 2LIS_03_BF request was compressed for a certain request number, because while validating the data for the Inventory Cube we are getting mismatched values. When I look at the InfoCube requests, there is a Repeat Delta for 2LIS_03_BF. I want to know whether that request was compressed in the cube with Marker Update or with No Marker Update.
    Thanks&Regards
    Anand

    Hi,
    You can check this in the compression log, as follows:
    Go to InfoCube MANAGE -> COLLAPSE -> LOG -> select the date of the request and click on APPLICATION LOG.
    On the APPLICATION LOG screen, just execute; you will find the request. Double-click on it and you will see a log entry such as:
    Mass upsert of markers executed (13781 data records)
    Regards,
    Prabhakar.

  • EWS - Office 365 - "One or more subscriptions in the request reside on another Client Access server. GetStreamingEvents won't proxy in the event of a batch request."

    Hello
    My goal is to subscribe for streaming notifications for multiple users in the same time.
    One way to do that is to create multiple StreamingSubscriptionConnections, each containing one StreamingSubscription per user. The problem with this method is that in Office 365 the maximum number of open connections is 20.
    Another way is to create one StreamingSubscriptionConnection and then add the StreamingSubscriptions for all users to that connection. This solves the maximum-connections problem and works fine with Exchange on-premises, but when trying it with Office 365 it results in the SubscriptionError:
    "One or more subscriptions in the request reside on another Client Access server. GetStreamingEvents won't proxy in the event of a batch request."
    Can anyone help me here?

    With Office 365 you need to group your subscriptions and set the affinity headers; see
    http://msdn.microsoft.com/en-us/library/office/dn458789(v=exchg.150).aspx and
    http://blogs.msdn.com/b/mstehle/archive/2013/07/17/more-affinity-considerations-for-exchange-online-and-exchange-2013.aspx . Take note of the restrictions on the group, and of the other throttling restrictions if you're using only one service account.
    Cheers
    Glen

  • 'Create Settlement Batches' Request and Output file name

    Hello,
    Is there a table/view which keeps the 'Create Settlement Batches' request id and the name of the output file of that request?
    The output file is created in the 4456_1.out format. I can see this file name in the IBY.IBY_BATCHES_ALL table's BATCHID column and in the IBY.IBY_TRXN_SUMMARIES_ALL table's BATCHID column, but neither table contains the concurrent request id.
    How can we find out from tables/views the name of the output file, given the request id?
    Thank You,
    sruwan

    Hi Hussein,
    I already looked at FND_CONCURRENT_REQUESTS. It doesn't contain the output batch file name. It has a column called OUTFILE_NAME, but that points to a file named concurrent_request_id.out (e.g. 123456789.out), and this file is blank when you run the 'Create Settlement Batches' request.
    The 'Create Settlement Batches' request also creates an output batch file in the format 1234_1.out. The problem is finding a place that links that file name to the concurrent request id.
    Thanks for your reply,
    sruwan

  • Web API OData authentication and batch requests

    Hello,
    We have an OWIN middleware integrated into our web application pipeline which serves the purpose of authenticating OData requests by JWT token. On successful authentication we set the Thread and HttpContext principal to a custom Principal object of ours.
    The problem arises when performing OData batch requests. The sub-requests of a batch are executed on separate threads with no HttpContext associated with them, so we lose the authenticated principal in those requests. I guess this is due to the way OData batch requests are executed, but our server logic strongly depends on the custom Principal. The way we currently work around this is with a custom ActionFilterAttribute that we apply to our ODataControllers:
    public class LoginAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(HttpActionContext actionContext)
        {
            if (HttpContext.Current != null && HttpContext.Current.User as CustomPrincipal != null)
                return;

            // TEMPORARY (need more robust solution): pushing batch request principal to a sub-request thread
            if (actionContext.Request.IsBatchRequest() && actionContext.RequestContext.Principal is CustomPrincipal)
            {
                Thread.CurrentPrincipal = actionContext.RequestContext.Principal;
            }
        }
    }
    This works fine, but we are not sure it is the right way to handle it. Does anybody know the recommended solution for the problem described?

    Hi Dmitry Marcautsan,
    According to your description, the issue is related to ASP.NET Web API, so I'd suggest you post it to the ASP.NET Web API forum to get better support.
    Thanks for your understanding.
    Best Regards,
    Amy Peng

  • Problem submitting batch request for sales order creation

    Hello experts,
    I have created a gateway service, implementing the CREATE_DEEP_ENTITY for order creation. I have tested my service with the Chrome Advanced Rest Client and it works fine with the following XML request:
    <?xml version="1.0" encoding="UTF-8"?>
    <atom:entry xmlns:atom="http://www.w3.org/2005/Atom" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
      <atom:content type="application/xml">
      <m:properties>
      <d:OrderId>0</d:OrderId>
      <d:DocumentType>TA</d:DocumentType>
      <d:CustomerId>C6603</d:CustomerId>
      <d:SalesOrg>S010</d:SalesOrg>
      <d:DistChannel>01</d:DistChannel>
      <d:Division>01</d:Division>
      <d:DocumentDate m:null="true" />
      <d:OrderValue m:null="true" />
      <d:Currency m:null="true" />
      </m:properties>
      </atom:content>
      <atom:link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/SOItems" type="application/atom+xml;type=feed" title="SALESORDERTSCH.SOHeader_SOItems">
      <m:inline>
      <atom:feed>
      <atom:entry>
      <atom:content type="application/xml">
      <m:properties>
      <d:OrderId>0</d:OrderId>
      <d:Item>000010</d:Item>
      <d:Material>C20013</d:Material>
      <d:Plant m:null="true" />
      <d:Quantity m:Type="Edm.Decimal">100.000</d:Quantity>
      <d:Description m:null="true" />
      <d:UoM m:null="true" />
      <d:Value m:null="true" />
      </m:properties>
      </atom:content>
      </atom:entry>
      <atom:entry>
      <atom:content type="application/xml">
      <m:properties>
      <d:OrderId>0</d:OrderId>
      <d:Item>000020</d:Item>
      <d:Material>C20014</d:Material>
      <d:Plant m:null="true" />
      <d:Quantity m:Type="Edm.Decimal">200.000</d:Quantity>
      <d:Description m:null="true" />
      <d:UoM m:null="true" />
      <d:Value m:null="true" />
      </m:properties>
      </atom:content>
      </atom:entry>
      </atom:feed>
      </m:inline>
      </atom:link>
    </atom:entry>
    Now that my service is working, I want to be able to call it from an SAP UI5/JavaScript application. In order to process multiple items for one order header, I use an OData batch request. Here is the JavaScript method that is executed:
    executeOrderCreation : function() {
      // Retrieve model from controller
      var oModel = sap.ui.getCore().getModel();
      oModel.setHeaders({
        "Access-Control-Allow-Origin" : "*",
        "Content-Type" : "application/x-www-form-urlencoded",
        "X-CSRF-Token" : "Fetch"
      });
      // Define data to be created
      var headerData = {
        OrderId : "0",
        DocumentType : "TA",
        CustomerId : "C6603",
        SalesOrg : "S010",
        DistChannel : "01",
        Division : "01",
        DocumentDate : null,
        OrderValue : null,
        Currency : null
      };
      var varItemData1 = {
        OrderId : "0",
        Item : "000010",
        Material : "C20013",
        Plant : null,
        Quantity : "100.000",
        Description : null,
        UoM : null,
        Value : null
      };
      var varItemData2 = {
        OrderId : "0",
        Item : "000020",
        Material : "C20014",
        Plant : null,
        Quantity : "100.000",
        Description : null,
        UoM : null,
        Value : null
      };
      var batchChanges = [];
      oModel.refreshSecurityToken(function(oData, oResponse) {
        alert("Refresh token OK");
      }, function() {
        alert("Refresh token failed");
      }, false);
      oModel.read('/SOHeaders/?$Batch', null, null, false, function(oData, oResponse) {
        // Create batch data
        batchChanges.push(oModel.createBatchOperation("SOHeaders", "POST", headerData));
        batchChanges.push(oModel.createBatchOperation("SOHeaders", "POST", varItemData1));
        batchChanges.push(oModel.createBatchOperation("SOHeaders", "POST", varItemData2));
        oModel.addBatchChangeOperations(batchChanges);
        // Submit changes and refresh the model
        oModel.submitBatch(function(oData) {
          oModel.refresh();
        }, function(oError) {
          alert("Submit failed " + oError);
        }, false);
      }, function() {
        alert("Read failed");
      });
    }
    The result is that when I submit the batch, I get an error saying: The following problem occurred: no handler for data -
    Am I doing the batchChanges creation right (header, then items)?
    Why am I facing this error?
    Any help would be greatly appreciated.
    Thanks and regards,
    Thibault

    Hi,
    you should also have '/' before the collection name, so that it is '/SOHeaders', as below:
      batchChanges.push(oModel.createBatchOperation("/SOHeaders", "POST", headerData));
      batchChanges.push(oModel.createBatchOperation("/SOHeaders", "POST", varItemData1));
      batchChanges.push(oModel.createBatchOperation("/SOHeaders", "POST", varItemData2));
    Regards,
    Chandra

  • Delta request compression twice

    hi sdn,
    Can we compress the same request twice? I have already compressed the request, but I cannot find the tick mark against that request even though the job completed successfully. Can we compress it again?
    regards
    rubane

    So your job log says that compression finished, and you have also checked that the data was successfully moved out of the F table into the E table.
    I am assuming that you have refreshed your session properly.
    Could you also try the following things:
    Go to Manage -> Environment -> Complete Check of request ID -> click on 'Complete availability check of all request IDs in the info-cube'.
    Go to transaction RSRV and perform a few checks for the InfoCube.
    Let me know the findings.
    Soumya

  • Acrobat repeatedly crashes in batch mode - compress / OCR

    Hi.
    I have c. 2000 scanned documents, each of 300 - 700 Mb.
    I am using Acrobat 11.
    The software repeatedly crashes in batch mode, either when compressing the files (i.e. saving not too lossily, in a format accessible only to later versions of Acrobat; I think I pick version 9) or when OCRing using ClearScan.
    The sensible thing would then be to run each document one at a time using some kind of scripting software, but alas it does not seem so easy to control OCR that way on Windows.
    Any suggestions?
    Is it churlish of me to expect that a product as mature as Acrobat should just work in its basic functionality?  From searching around on the web, it seems that I am not the only one having this problem.
    Thanks.

    2000 in one go -- you need a server product rather than a desktop product (aka Acrobat).
    Acrobat can accomplish the task on a smaller population of files.
    What number? You'd determine that via trial and error. Perhaps 50 at a whack; or more; or less.
    Applications in Windows tend to not manage memory/resources so well. Acrobat is no exception.
    So you may find it useful to reboot the box periodically.
    Be well...

  • Office 365 Streaming Notifications, "One or more subscriptions in the request reside on another Client Access server."

    Hello all,
    I am maintaining a part of our product that requires monitoring mailboxes for events. This is currently done by using streaming connections to receive the notifications. Our solution has been successful for smaller numbers of mailboxes, ~200 or less, but we are seeing issues when scaling up to, say, 5000 mailboxes.
    The error and the sequence leading up to it are as follows:
    1. Create an Exchange Service account.
    2. Set exchSvc.ConnectionGroupName = someGroupName;
    3. Add the HTTP headers ("X-AnchorMailbox", userSmtp) and ("X-PreferServerAffinity", "true").
    4. Create a new impersonated UserId for the userSmtp address that is our anchor mailbox.
    5. Set the Exchange Service account's ImpersonatedUserId to the one we just made.
    6. Call ExchangeServiceAccount.SubscribeToStreamingNotifications(new FolderId[] { WellKnownFolderName.Inbox }, _mailEvents);
    Up to this point everything was successful and we saw no error messages.
    We then create a second impersonated UserId for a different mailbox and repeat the process above from that step forward. Upon the final step, subscribing to the streaming notifications, we get the error:
    Exception: Microsoft.Exchange.WebServices.Data.ServiceResponseException: One or more subscriptions in the request reside on another Client Access server. GetStreamingEvents won't proxy in the event of a batch request.
    This is only the second subscription that we are trying to add to this connection, and it is to a different mailbox than the first.
    Can anyone please help point me to where this is going wrong?

    >> Is there a good way to verify the number of subscriptions in a group?
    Not that I know of; you should be tracking this in your code. There are no server-side operations in EWS to even tell you whether there are active subscriptions on a mailbox.
    >> The error I am getting is on the second subscription in a new group, just after doing the anchor mailbox, so I don't think we are hitting the 200 limit.
    It's hard to say without seeing your code, but it sounds like there is a problem with your grouping code. One way to validate this: every request you make with the EWS Managed API carries a RequestId header, see
    http://blogs.msdn.com/b/exchangedev/archive/2012/06/18/exchange-web-services-managed-api-1-2-1-now-released.aspx
    You should be able to give that RequestId to the Office 365 support people, and they should be able to check the EWS log on the server and tell you more about what's happening (it may be a server-side bug). Something doesn't quite add up, in that the X-BackEndOverrideCookie is what ultimately determines which server the request ends up at, and the error is essentially telling you it is ending up at the wrong server (have you looked at the headers on the error message?). Is it always one group of users that fails? Have you tried different groups and different combinations, etc.?
    Cheers
    Glen

  • SQL for manually cancelling a PAYMENT BATCH

    Product: FIN_AP
    Date written: 2004-05-17
    SQL for manually cancelling a PAYMENT BATCH
    =====================================
    PURPOSE
    At customer sites, a Payment Batch occasionally gets stuck with the status Formatting or Confirming; when the customer then tries to cancel it, it remains in the Cancelling state and the Payment Batch request does not proceed. In that case, cancel the corresponding Concurrent Request and use the datafix script below to change the status of the Payment Batch to Cancelled.
    Problem Description
    The Payment Batch status does not change from the Cancelling state. The Concurrent Request does not finish and remains hung.
    Workaround
    Solution Description
    Change the status directly to cancelled using the following SQL statements.
    1) DELETE from ap_checkrun_conc_processes_all
       where checkrun_name = '<payment batch name>';
    2) DELETE from ap_selected_invoices_all
       where checkrun_name = '<payment batch name>';
    3) DELETE from ap_selected_invoice_checks_all
       where checkrun_name = '<payment batch name>';
    4) DELETE from ap_checkrun_confirmations_all
       where checkrun_name = '<payment batch name>';
    5) UPDATE ap_inv_selection_criteria_all
       SET status = 'CANCELED'
       where checkrun_name = '<payment batch name>';
    Note that when updating the status you must enter exactly 'CANCELED'.
    If you type 'CANCELLED' with a double L, the batch will not be recognized as cancelled.
    Reference Documents
    143668.1

  • Financial Reporting batch runs without being scheduled when services restart

    Hi,
    hopefully someone may have come across this before:
    I set up a scheduled batch yesterday (Workspace 11.1.1.1.3) pointing at a batch of 2 large reports. As part of the testing I ran it at a scheduled time during the day and it worked perfectly. However, the following morning it ran the batch again even though there was nothing in the scheduler (I checked the scheduled job and it still said last run time 14:29 the previous day), yet the snapshots that it produces had been updated overnight. One thing to add: we shut down all the Hyperion services (to allow for a system backup), and they had just come back up when the reports ran (I could see the batch request in the FR log). Is there a way of stopping this from happening, i.e. some setting/config I need to change?
    Thanks,
    J.

    Refer to
    http://download.oracle.com/docs/cd/E12825_01/epm.111/fr_user/ch15s05s02.html
    for more details about command-line scheduling.

  • Compressing movie for the web

    Hello All!
    I'm fairly new to Director but have lots of experience with Flash; hopefully it will come in handy.
    I have an eLearning movie that I plan to distribute over the internet, but the published .dcr is 22 MB, which is not ideal for preloading. I guess I need to optimize all the images and audio files inside the movie. My target file size is about 3 to 5 MB.
    I have some questions and would be glad to get some input:
    1. As I understand it, I can compress individual images using Director or an external editor (like Fireworks). Which way is preferred?
    2. Since I have hundreds of images to optimize, I wonder if there is a script or Xtra that can batch the compression task?
    3. Regarding audio compression: can I use an external editor, or must I delete the audio files and re-import them as MP4?
    I'm using Director 11.5 on Mac.
    Thanks a lot
    Shay

    Hi again,
    I investigated the .dir file and discovered lots of audio files in SWA format. All of them contain voice-over audio. I'm pretty sure they should be converted to MP3 or MP4. The big question is: how do I do it?
    Remember that I'm using Director 11.5 on Mac, which has no compress-audio option in the publish settings.
    Thanks
    Shay

  • CSOM and Batch Insert Performance

    Can we batch requests while inserting records into SharePoint lists?
    Did anyone notice any performance issues while bulk-inserting 75 rows of 15 columns per row into SharePoint 2013 using CSOM?
    V

    Hi,
    You can use batch processing with CSOM, for example:
    function CreateListItems(objMyArray) {
        var itemArray = [];
        var clientContext = SP.ClientContext.get_current();
        var oList = clientContext.get_web().get_lists().getByTitle('MyList');
        for (var index in objMyArray) {
            var curObject = objMyArray[index];
            var itemCreateInfo = new SP.ListItemCreationInformation();
            var oListItem = oList.addItem(itemCreateInfo);
            oListItem.set_item('Title', curObject.title);
            oListItem.update();
            itemArray[index] = oListItem;
            clientContext.load(itemArray[index]);
        }
        // All queued creates are sent to the server in one round trip here.
        clientContext.executeQueryAsync(onQuerySucceeded, onQueryFailed);
    }
    And it works well (no performance issue).
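    For completeness, a usage sketch (the callback names match the snippet above; the input array shape and its title property are assumptions for illustration):

    function onQuerySucceeded() {
        console.log('All items created in one batch round trip.');
    }
    function onQueryFailed(sender, args) {
        console.log('Request failed: ' + args.get_message());
    }
    CreateListItems([{ title : 'Row 1' }, { title : 'Row 2' }, { title : 'Row 3' }]);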

  • Cannot submit batch error!

    When I go to submit a batch for compression, I get an error that says:
    Cannot submit batch
    Unable to connect to background process.
    Is there a way to fix this?
    I use Compressor 2.

    It says when I run it:
    Resetting Qmaster...
    Checking file permissions...
    /Applications/Apple Qadministrator.app does not exist. You are advised to reinstall Compressor / Qmaster.
    /Applications/Apple Qmaster.app does not exist. You are advised to reinstall Compressor / Qmaster.
    Removing temporary spool data...
    Trashing preferences...
    Trying to start Qmaster...
    Repair complete.
    Well, I cannot reinstall the program at the moment because, actually, I got it from my brother, who got it from a friend. What else can I do?

Maybe you are looking for

  • How do you install "Twixtor" plug in for FCP

    Hello, I think I have installed it properly, I'm not to sure... In FCP i click on "EFFECTS/Video Filters/RE: Vision Plug-in/ and I see Twixtor 4 TWixtor 4.5 pro Twixor 4.5 vector but when i click it nothing happens... shouldn't a box pop in the viwer

  • Acroread -landscape option not working for Reader 8.1.3 - Solaris 10

    OK I've read the other posts in which others are having this issue as well. I'm hoping this has been fixed or someone can tell me what we need to do to get this working. This use to work with Solaris 9/Reader 5 but when we upgraded to Solaris 10/Read

  • Graphics failure in my G5 iMac

    Today my G5 iMac (serial #QP5160FQPNY) would boot up but the screen is reticulated with no Finder screen detail - screen is a solid color with a small checkerboard pattern throughout. I tried shutting it off, disconnecting all peripherals and the reb

  • How to attach a custom transaction to a user menu

    Hi all, I need some help on how to attach a transaction code to a user menu. I've checked the forum but I could not find the exact instructions on how to do it. This is how the user menu should look like: User menu for Ricky Orea '->Manager Catalog C

  • Backup / APS Solution

    Good afternoon. I have been reading the archives, and there are numerous questions about backup, but nearly every one lacks enough information for the question to get answered, so I am going to try again. I have an X-serve with one 80GB drive. We use