Calc time issue.

Hello,
My calc script is taking a little longer to complete. I am doing the following FIX in my calc script:

    FIX (Actual, @RELATIVE("P&L Hierarchy",0), @RELATIVE(Headcount,0))
        FIX (@DESCENDANTS("Fiscal Year"))
            Amount (
                IF (Amount == #Missing)
                    IF (@ISMBR(&Fiscal_Year))
                        IF (@ISMBR(Aug:&Fiscal_Month))
                            IF (@ISMBR(Aug))
                                PreviousHeadcount = @SHIFT("Amount"->Jul, 1, "Fiscal Year");
                            ELSE
                                PreviousHeadcount = @SHIFT(Amount, -1);
                            ENDIF
                            IF (PreviousHeadcount == #Missing)
                                Amount = #Missing;
                            ELSE
                                Amount = 0;
                            ENDIF
                        ENDIF
                    ELSE
                        IF (@ISMBR(Aug))
                            PreviousHeadcount = @SHIFT("Amount"->Jul, 1, "Fiscal Year");
                        ELSE
                            PreviousHeadcount = @SHIFT(Amount, -1);
                        ENDIF
                        IF (PreviousHeadcount == #Missing)
                            Amount = #Missing;
                        ELSE
                            Amount = 0;
                        ENDIF
                    ENDIF
                ENDIF
            )
        ENDFIX
    ENDFIX

Is it possible to do the above in the load rule, so I can eliminate this process from the calc and improve the calc time?
Thanks in advance.
Ricky
[email protected]
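For what it's worth, the two branches of the IF above contain identical inner logic, so they can be collapsed into a single test. A minimal sketch using the member names from the post (untested, and assuming the duplicated branches really are meant to be identical):

    FIX (Actual, @RELATIVE("P&L Hierarchy",0), @RELATIVE(Headcount,0))
        FIX (@DESCENDANTS("Fiscal Year"))
            Amount (
                /* Run only for past years, or for months up to &Fiscal_Month in the current year */
                IF (Amount == #Missing AND
                    (NOT @ISMBR(&Fiscal_Year) OR @ISMBR(Aug:&Fiscal_Month)))
                    IF (@ISMBR(Aug))
                        PreviousHeadcount = @SHIFT("Amount"->Jul, 1, "Fiscal Year");
                    ELSE
                        PreviousHeadcount = @SHIFT(Amount, -1);
                    ENDIF
                    IF (PreviousHeadcount == #Missing)
                        Amount = #Missing;
                    ELSE
                        Amount = 0;
                    ENDIF
                ENDIF
            )
        ENDFIX
    ENDFIX

As for the load rule: a load rule by itself only transforms fields of incoming records; it cannot look across months or years already in the cube, so carry-forward logic like this has to stay in a calc script or move upstream into the source extract.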

Following the discussion about the @CURRMBR() function: are there any better functions that could be used instead of it to get the desired result, especially when you want to find out where the user is making changes to values and run calculations on those cells? Is there a better way to catch that in Hyperion that would make your scripts more efficient and faster, especially when you don't know the value ahead of time to hard-code it into the calculation?
I have some scripts/BRs where I have used this function a couple of times, and it bothers me to think that it might be making them inefficient and slow.
~ Adella
Edited by: Adella on Oct 21, 2011 11:25 AM
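One common alternative, at least in Planning business rules, is to scope the script with run-time prompts and FIX statements, so the rule is told which slice the user changed instead of probing each cell with @CURRMBR(). A minimal sketch, assuming Calc Manager {variable} syntax and hypothetical variable and measure names:

    FIX ({rtpScenario}, {rtpVersion}, {rtpEntity})
        /* The calculation only touches the slice the user submitted */
        "Amount" = "Units" * "Rate";
    ENDFIX

@CURRMBR() is sometimes unavoidable in member formulas, but inside a business rule a tight FIX is usually both faster and easier to read.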

Similar Messages

  • Slow calc time with SET CREATEBLOCKONEQ OFF for block creation

    Hello everyone,
    I have a problem with the slow execution of one of my calc scripts:
    A simplified version of my calc script to calculate 6 accounts looks like this:
    SET UPDATECALC OFF;
    SET FRMLBOTTOMUP ON;
    SET CREATEBLOCKONEQ ON;
    SET CREATENONMISSINGBLK ON;
    FIX (
    FY12,
    "Forecast",
    "Final",
    @LEVMBRS("Cost Centre",0),
    @LEVMBRS("Products",0),
    @LEVMBRS("Entities",0))
    SET CREATEBLOCKONEQ OFF;
    "10000";"20000";"30000";"40000";"50000";"60000";
    SET CREATEBLOCKONEQ ON;
    ENDFIX
    The member formula for each of the accounts is relatively complex. One of the changes recently implemented in the FIX was opening up the cost centre dimension. Since then the calculation runs much slower (>1h). If I change the setting to SET CREATEBLOCKONEQ ON, the calculation is very fast (1 min); however, no blocks are created. I am looking for a way to create the required blocks and calculate the member formulas while decreasing calc time. Does anybody have any idea what to improve?
    Thanks for your input
    p.s. DataStorage in the member properties for the above accounts is Never Share

    MattRollings wrote:
    "If the formula is too complex it tends not to aggregate properly, especially when using ratios in calculations. Using stored members with member formulas I have found is much faster, efficient, and less prone to agg issues - especially in Workforce type apps. We were experiencing that exact problem, hence stored members."
    So why not break it up into steps? Step 1, force the calculation of the lower-level member formulas, whatever they are. Make sure that that works. Then take the upper-level members (whatever they are) and make them dynamic. There's nothing that says that you must make them all stored. I try, wherever possible, to make as much dynamic as possible. As I wrote, sometimes I can't for calc-order reasons, but as soon as I get past that I let the "free" dense dynamic calcs happen wherever I can. Yes, the number of blocks touched is the same (maybe), but it is still worth a shot (a sketch of this split follows at the end of this post).
    Also, you mentioned in your original post that the introduction of the FIX slowed things down. That seems counter-intuitive from a block-count perspective. Does your FIX really select all level-zero members in all dimensions?
    Last thought on this somewhat overactive thread (you are getting a lot of advice; who knows, maybe some of it is good ;) ): have you tried flipping the member calcs on their heads, i.e., taking what is an Accounts calc and making it a Forecast calc with cross-dims to match? You would have different, but maybe more manageable, block-creation issues at that point.
    Regards,
    Cameron Lackpour
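    As a rough sketch of the two-step split described above, reusing the FIX and account names from the original post (whether the upper-level members can be tagged Dynamic Calc depends on calc order, so treat this as a starting point, not a definitive fix; note also that the original script's SET CREATENONMISSINGBLK ON is itself frequently a major slowdown, since it materialises potentially-missing blocks):

    SET UPDATECALC OFF;
    SET FRMLBOTTOMUP ON;
    /* Step 1: calculate only the stored level-0 account formulas */
    FIX (FY12, "Forecast", "Final",
         @LEVMBRS("Cost Centre",0), @LEVMBRS("Products",0), @LEVMBRS("Entities",0))
        SET CREATEBLOCKONEQ OFF;
        "10000"; "20000"; "30000"; "40000"; "50000"; "60000";
        SET CREATEBLOCKONEQ ON;
    ENDFIX
    /* Step 2: no script needed - the upper-level members are tagged */
    /* Dynamic Calc in the outline, so they roll up at retrieval time */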

  • Calc time vs. defragmentation

    I have a database with an average cluster ratio of .44. If I export and reload my data, it will go to 1.0, but as soon as I calc it goes back to .44. Under my current data settings this calc takes a mere 5.7 seconds to run, and retrieval time is fine. In an effort to improve the cluster ratio, I played with my dense/sparse settings, changed my time dimension to sparse, and was able to get a .995 cluster ratio after calculation; the problem is that the calc script now runs for 127 seconds, which is 22x longer.
    I know that either calc time is minimal by Essbase standards, but I'm still curious which way is "optimal". I would think it is always best to take the enhanced performance over the academic issue of cluster ratio, but I'm concerned about the point at which this becomes more than an academic question. How important is the cluster ratio, and what are the implications of having a database that is more fragmented? Are there other things besides calc and retrieval time, perhaps not visible on the surface, that I should be concerned with? Since defragmentation should improve performance, is it worth sacrificing some performance for less fragmentation? Of course, as this database grows this will become more of an issue.
    Any input, thoughts and comments would be appreciated.

    Just my humble opinion: everybody's data has a different natural sparsity, and rather than think in terms of 'fragmentation', think in terms of the nature of your data. If you made EVERY dimension sparse except for Accounts, and had only one member in Accounts, your database would consist solely of single-cell datablocks that are 100% populated - as dense as you can get. The trade-off is that you would have a HUGE number of these small, highly compact datablocks, and your calc times would be enormous.
    As a general rule, you can take each of your densest dimensions in turn and make them "dense" in the outline until your datablocks approach 80k in size. The trade-off is that not all cells in each datablock will be populated, but you'll have fewer datablocks and your calcs will zoom. Your goal is not to simply minimize the number of datablocks, nor to minimize the datablock size, nor to maximize the block density. Your goal is to reach a compromise position that maximizes the utility of the database.
    A good approach is to hit a nice compromise spot in terms of sparse/dense settings and then begin optimizing your calcs and considering converting highly sparse stored dimensions to attributes and such. These changes can make a tremendous impact on calc time. We just dropped our calc time on a database from 14 hours to 45 minutes and didn't even touch the dense/sparse settings.
    -dan
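    To put rough numbers on the 80k guideline (the dimension counts below are purely illustrative): the expanded block size is the product of the stored members of the dense dimensions times 8 bytes per cell, so for example

    block size = (12 months) x (400 stored accounts) x (2 scenarios) x 8 bytes
               = 9,600 cells x 8 bytes = 76,800 bytes, roughly 75 KB

    which lands just under the 80k mark Dan mentions.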

  • Time issue - urgent

    Hi experts!
    I am using this code:
    DATA: LTIME TYPE SY-UZEIT,
          T1 TYPE SY-UZEIT VALUE 030000,
          T2 TYPE SY-UZEIT VALUE 050000.
    But when I use T1 and T2 in the loop (in the IF statement), their values come out as T1 = 082000 and T2 = 131100, so the comparison of T1(6), T2(6) and LTIME(6) in the IF statement does not work. How can I resolve this time issue? I do not want to include timings between 3:00 a.m. and 5:00 a.m.
    For this I have used the following code, but it's not updating data for any employee. Is this code correct or not?
    OPEN DATASET PA_FILE FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    DO.
    READ DATASET PA_FILE INTO REC.
    CLEAR WA_PUNCHES.
    IF SY-SUBRC <> 0.
    EXIT.
    ENDIF.
    WRITE: / REC.
    IF NOT ( WA_SCORE-LTIME GE T1 AND WA_SCORE-LTIME LE T2 ).
    *** AND
    ***( WA_SCORE-TERID NE TR3 AND
    ***WA_SCORE-TERID NE TR4 AND
    ***WA_SCORE-TERID NE TR5 AND
    ***WA_SCORE-TERID NE TR6 AND
    ***WA_SCORE-TERID NE TR7 AND
    ***WA_SCORE-TERID NE TR8 ) .
    WA_SCORE-PERNR = REC+0(8) .
    WA_SCORE-LDATE = REC+9(8) .
    WA_SCORE-LTIME = REC+18(6) .
    WA_SCORE-CANID = REC+25(8) .
    WA_SCORE-TERID = REC+34(4) .
    APPEND WA_SCORE TO IT_SCORE.
    ELSE .
    ADD 1 TO COUNT .
    EXIT .
    ENDIF .
    ENDDO.
    Please help me - it's a very important program.
    thanks

    Hi,
    Pass values in quotes
    T1 TYPE SY-UZEIT VALUE '030000',
    T2 TYPE SY-UZEIT VALUE '050000',
    Can you please check the values of T1, T2 and LTIME once again in debug mode after making this change?
    ashish

  • Time issue seems odd to me

    I put a secondary domain and PO at a remote location - running 8.0.3 on SLES/OES - and moved all of the "local" users to that PO.
    While everything seems to run OK, we have a weird time issue.
    Mail into and out of the new mailboxes shows a three-and-a-half-hour time difference, both in the date column in the GroupWise client and in the printed header of the email.
    The odd thing is that if I go to the properties of the email, the creation date and the file date and time are both correct.
    I can't see any time issues with the servers (primary or secondary), and especially nothing that would be 3 hours 30 minutes off.
    There is a timezone difference between the primary and secondary, but that's just 1 hour.
    Any thoughts?
    As a workaround I have had my users add the Created column so they can see when they actually received the email, as our clients have time-critical projects.
    Thanks
    Dennis

    Danita,
    Thanks for the reply - oh, how I wish it were that easy. I've checked and double-checked the PO object and the domain object - both say US Eastern time.
    If I send email from a mailbox on the secondary to the primary - everything is correct (a 1-hour time difference, Eastern to Central).
    If I send mail from a mailbox on the primary to the secondary - it shows up right away, but says 3 hours 30 minutes later than it should (in Date/Time, but not in Created).
    If I send email from a mailbox on the secondary to a GMail account - the header shows the 3½-hour offset, but the properties of the mail show it correctly.
    I would be happy to send email from that PO to anyone who wants to see for themselves.
    Other than the PO object and the DOM object, where should I be looking for a time difference? And how would it be getting UTC minus 30 minutes - is there a special timezone out there for Hy Brasil?
    The workstations all get their time settings from "time.windows.com", the Windows default.
    Thanks for looking into this - I just hope that I can get it figured out. As small an issue as I think it is (they get their email; it just looks like it gets read before they officially receive it), it seems that this office uses the received time to track how long it takes them to do their work.
    Dennis

  • Real time issue

    Hi all,
    Can anybody please send me some FICO real-time issues to [email protected]?
    And please tell me how I should prepare for the interview.
    Thanks & Regards
    Vaibhav

    Hi Balraj,
    In normal practice, developers try to find a similar InfoCube (as per the requirement) in the Business Content, but you will not always be lucky enough to find such an InfoCube there; you may need to create your own to suit the business requirements. Regarding the characteristics and key figures, it again depends on the requirements. Calculated objects can be assigned as key figures - like sales quantity, revenue and net sales - whereas the dimensions (characteristics) depend purely on the reporting point of view, like Customer, Material and Sales Document Type.
    Hope this helps you!
    Thanks,
    Sanjiv

  • Execution Time Issue

    Help Please!!!
    I've been searching for an execution time issue in our application for a while now. Here is some background on the application:
    Collects analog data from a cDAQ chassis with a 9205 at 5kHz
    Data is collected in 100ms chunks
    Some of the data is saved directly to a TDMS file while the rest is averaged for a single data point. That single data point is saved to disk in a text file every 200ms.
    Problem: During operation, the VI that writes the data to the text file will periodically take many hundreds of milliseconds to execute. Normal operation execution times are on the order of 1ms or less. This issue will happen randomly during operation. It's usually many seconds between times that this occurs and it doesn't seem to have any pattern to when the event happens.
    Attached is a screenshot of the VI in question. The timing check labeled "A" is the one that will show the troubling execution time. All the other timing checks show 0ms every time this issue occurs. I simply can't see what else is holding this thing up. The only unchecked subVI is the "append error call chain" call. I've gone through the hierarchy of that VI and ensured that everything is set for reentrant execution. I will check that too soon, but I really don't expect to find anything.
    Where else can I look for where the time went? It doesn't seem to make sense.
    Thanks for reading!
    Tim
    Attachments:
    Screen Shot 2013-09-06 at 9.32.46 AM.png ‏87 KB

    You should probably increase how much data you write with a single Write to Text File.  Move the Write to Text File out of the FOR loop.  Just have the data to be written autoindex to create an array of strings.  The Write to Text File will accept the array of strings directly, writing a single line for each element in the array.
    Another idea I am having is to use another loop (yes another queue as well) for the writing of the file.  But you put the Dequeue Element inside of another WHILE loop.  On the first iteration of this inside loop, set the timeout to something normal or -1 for wait forever.  Any further iteration should have a timeout of 0.  You do this with a shift register.  Autoindex the read strings out of the loop.  This array goes straight into the Write to Text File.  This way you can quickly catch up when your file write takes a long time.
    NOTE:  This is just a very quick example I put together. It is far from a complete idea, but it shows the general idea I was having with reading the queue.
    Attachments:
    Write all data on queue.png ‏16 KB

  • Time issues? time zone settings appear in two places

    If you leave the default "set time and date automatically", you see the time zone support below and you can check it or uncheck. You can then set your time zone there. But if you uncheck the automatic date and time, you get another time zone choice and the default is Cupertino, of course. I wonder if that's the problem with some of these time issues.
    I've since unchecked automatic date and time, set my city's time zone, then rechecked automatic. The only weirdness I've seen was with an excel spreadsheet where the time was correctly shown on my mac, but not on the iphone. Unfortunately I don't have that sheet anymore or I'd try again to see if it works now.
    Just a thought.

    I noticed the same thing and did the same thing you did, because my time was off by several hours each time I turned the phone on after it had been off overnight. When it was off by only 1 hour, I could turn Auto OFF, then tap the timezone (already set to the correct city) and the time would instantly correct itself. Then I'd turn Auto back ON.
    Turns out that my time issue was related to having the SIM PIN enabled. Once I disabled that, the time's been correct every day.

  • Real time issues

    hi there,
    Can anyone share some of the issues that have been dealt with in real time, i.e. during the blueprint stage, especially in the realisation stage, and in the final preparation stage?
    Please answer this question as soon as possible.

    Hi Medasani,
    Real-time issues you would normally get from SAP support consultants, so getting all that information is difficult. On what basis are you asking? Let me know your requirements.
    For example: tolerance limits have to be increased from Rs.1 to Rs.100 for employees and for vendors/customers. This is the issue from the client.
    How will you decide on the increase?
    First you should understand the client's workflow and get approval from the client end (core team); then you can increase the limits, deciding the percentage as well.
    OBA3 / OBA2
    Warm Regards,
    Sivakumar Sathiyamoorthy

  • [SOLVED] Yet another time issue...

    Hi there,
    I'm having some crazy, hair-pulling, head-banging UTC time issues on both my dual-boot (Arch & Windows 8) desktop and single-boot laptop.
    I've been searching and following a multitude of old threads and the official wiki guide; however, I simply cannot get UTC time to stay correct between reboots. I've installed Arch a fair few times now, both before and after the switch to systemd and in dual- and single-boot setups, and have never had time-related issues like this before...
    Here's a run down of the whole sorry story....
    Fresh install - set time with hwclock --systohc --utc as suggested in the Beginners' Guide.
    First boot - time an hour out - huh? cat /etc/adjtime and ls -l /etc/localtime - everything as it should be... UTC and symlink to /usr/share/zoneinfo/Europe/London. Nonetheless I duly follow the Time guide on the wiki, get to know timedatectl a bit, and soon after, time is set to be correct. Great.
    Reboot. Time an hour out. Whaaaa??. Start googling, finding a plethora of bbs links to others with similar issues, decide to use NTP. Install, ntpd -qg. All good again. Time right. Phew...
    Reboot. Time an hour out. Arrrrgggghhhh. (Bear in mind, at this point I hadn't even booted the Windows 8 disc, and I went through the exact same process on my Arch-only laptop...)
    More googling, thread reading. Find out that hwclock and ntp might be in conflict. Delete /etc/adjtime, reinstall tzdata, re-follow Time article to the letter and reenable NTP to start at boot.
    Reboot - time is forward an hour yet again. Bang head repeatedly on desk. Then suddenly, 10 mins or so after boot, NTP kicks in, and time magically goes back an hour to the correct time! Surely NTP should be doing this early in the boot process, not 10 minutes after? But OK, at least something is happening...
    Next I dare to boot into Windows, and yup, time an hour out. Expected, but frustrating nonetheless. I add the registry tweak as suggested in the guide and turn off the Windows time synchronisation, during which I notice that Windows 8 time is set to UTC by default, not localtime, which is what the wiki says it uses. Is this a Windows 8 thing?
    I can't help thinking that if this is the case, and the wiki advice is written under the assumption that Windows time is always localtime, then perhaps this is the root of my problem...
    Anyway, Windows time all good now. Reboot Arch - time again wrong until NTP finally kicks in - sigh. Reboot to Windows. Time an hour back. Gaaaahhhh - WTF?! Reset it to be correct. Try again. Same thing. Oops, forgot to mention that on each reboot I check the BIOS clock, which remains persistently correct during the whole debacle...
    Ok, so I try without the registry hack and turn Windows sync back on. Still fucked. I try various combinations of the two, and I try resetting Arch time over and over again. With NTP - late sync. With hwclock only, or both hwclock & NTP - totally fucked! Eventually, I give up on UTC in Arch, set it to localtime, delete the registry key, turn Windows time sync back on, and thus far all good...
    TL;DR - couldn't get hwclock to set UTC time correctly across reboots. NTP worked (sort of), but after each reboot it would take ten minutes for the time to sync and the clock to move back an hour to the correct time. Localtime just works...
    My question is: why is the wiki (and the timedatectl status output, for that matter) so adamant that we should use UTC? From what I can gather, as long as I boot Windows around the DST change, or manually move the clock forward or back an hour, all should be well, no? Or are there other issues that I may run into that I've missed? The thing I'm most concerned about is data corruption due to timestamp issues...
    Also, why does the wiki repeatedly say that Windows uses localtime, when this doesn't appear to be the case in 8? Does the wiki need updating, or is Windows lying to me? I know which I'd put my money on ;-)
    Finally can anyone explain why my time was always an hour forward after a reboot, even when the time was set correctly before, the BIOS showed the correct time, and NTP was in use?
    Sorry for the crazy long posting, but this issue has been driving me totally batty!! Any light-shedding gratefully appreciated :-)
    Last edited by knowayhack (2013-08-10 15:24:09)

    Ok, so I set the clock back an hour in the BIOS, booted Windows - time right - YAY.
    Rebooted Arch - time right. OMFG - pure joy!!!
    I have to say, I still find it odd that this is how it works, and would like to confirm that things are looking as they should in the output from the following commands....
    toby@archy ~ > timedatectl status
          Local time: Sat 2013-08-10 15:42:56 BST
      Universal time: Sat 2013-08-10 14:42:56 UTC
            Timezone: Europe/London (BST, +0100)
         NTP enabled: n/a
    NTP synchronized: no
    RTC in local TZ: no
          DST active: yes
    Last DST change: DST began at
                      Sun 2013-03-31 00:59:59 GMT
                      Sun 2013-03-31 02:00:00 BST
    Next DST change: DST ends (the clock jumps one hour backwards) at
                      Sun 2013-10-27 01:59:59 BST
                      Sun 2013-10-27 01:00:00 GMT
    toby@archy ~ > sudo hwclock --debug
    sudo: timestamp too far in the future: Aug 10 16:39:24 2013
    We trust you have received the usual lecture from the local System
    Administrator. It usually boils down to these three things:
        #1) Respect the privacy of others.
        #2) Think before you type.
        #3) With great power comes great responsibility.
    [sudo] password for toby:
    hwclock from util-linux 2.23.2
    Using /dev interface to clock.
    Last drift adjustment done at 1376148264 seconds after 1969
    Last calibration done at 1376148264 seconds after 1969
    Hardware clock is on UTC time
    Assuming hardware clock is kept in UTC time.
    Waiting for clock tick...
    ...got clock tick
    Time read from Hardware Clock: 2013/08/10 14:43:20
    Hw clock time : 2013/08/10 14:43:20 = 1376145800 seconds since 1969
    Sat 10 Aug 2013 15:43:20 BST  -0.953640 seconds
    Is it OK to now reinstall NTP to keep the clock in sync on Arch, and are the Windows reg tweaks still necessary on Windows 8?
    Thank you guys SOOO much for all the unbelievably speedy help on this issue - you have no idea how much this has been stressing me out!!
    Last edited by knowayhack (2013-08-10 14:54:31)

  • Communication channel availablity time issue

    Hi All,
    We want to schedule a communication channel every midnight, but when I tried to set the time there is a difference of about 6 hours, so every time I need to schedule I have to calculate and adjust the time. I checked the server time and it matches the system time, and even the messages in MONI show the correct time. Do we need to check anything in the adapter engine to correct this time difference?
    Appreciate reply on this.
    Thanks,
    Madhu

    Madhu
    I'm not sure if this is exactly your issue, but please see the following links concerning time issues in XI.
    Time setting is differnt in sxmb_moni and RWB
    Different time stamp in sxmb_moni and RWB
    Hope this helps.
    PJ

  • Reduce Calc Time

    Afternoon everyone,
    We load data into our cube monthly, and when running the calc on the database it can take between 2 and 3 days to complete. I appreciate that calc time is determined by a wide variety of factors (number of dense/sparse dimensions and members, etc.) - but looking at things from a system-resource view:
    The server has 8 CPU's.
    With total memory = 4194303 (according to server information within Application Manager)
    When calcing, approx 1500000 of memory is used.
    The start of the calc script defines the following parameters: 'SET CALCPARALLEL 4; SET UPDATECALC OFF;'
    Would increasing the 'SET CALCPARALLEL' parameter from 4 to 6 be a viable approach to trying to reduce calc time (especially given the amount of available resource on the server)?
    The server won't be used for anything else during the calc.

    CL wrote:
    Are you running 64 bit or 32 bit Essbase?
    32 bit maxes out at 4 CPUs for parallel calc; 64 bit can go to 8.
    You might want to look at the number of task dimensions set for parallel calculations.
    See: http://download.oracle.com/docs/cd/E10530_01/doc/epm.931/html_esb_techref/calc/set_calctaskdims.htm
    And your calculator cache is going to impact parallel calcs as well.
    All of this can go up in smoke if you have calculations that require Essbase to calculate in serial, such as cross dimensionals.
    There are lots of other possibilities re performance.
    1) Could the SAN/local drives be faster?
    2) Do you need to calc the whole database (I have no idea what your db is, only that you mention a monthly calc -- is it for just one month?)
    3) Partitioning the db by month<--This is probably a really good place to look although nothing is free.
    4) Going to ASO
    There are others as well.
    I appreciate that the above four thoughts are beyond your question; they're just food for thought.
    Regards,
    Cameron Lackpour

    ASO should be an option. It is a much, much faster rollup than BSO.
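    For reference, a minimal sketch of the settings discussed in this thread (the task-dimension value of 2 is illustrative only; the right number depends on the outline and should be checked against the SET CALCTASKDIMS documentation linked above):

    SET CALCPARALLEL 6;     /* 64-bit Essbase allows up to 8 parallel threads */
    SET CALCTASKDIMS 2;     /* sparse dimensions used to split the calc into parallel tasks */
    SET UPDATECALC OFF;
    CALC ALL;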

  • Calc time estimates, anyone?

    I have a 13-dimension database that yields about 20GB of data after calculation. Block size is about 18K. The only calculation I run is a CALC DIM statement (I'd run a CALC ALL, but I fix on members from some dimensions). The CALC DIM just consolidates each dimension (no member formulas, and all consolidation tags are +).
    For my sparse dimensions, I have deep hierarchies (two, though, are flat). The database is reset and reloaded with data prior to calculation. Calculation time for one year's worth of data is 6 hours. That can't be right.
    Without necessarily telling me how to do it, can someone tell me what's the best calc time I should be able to achieve with some performance tuning?

    The calc time you should be aiming for should not exceed 1 hour. There is a way to bring your calc time down dramatically; however, given the number of dimensions, the data volume, and your hardware, yours is just about the expected time.
    Your best card in situations like this is to use the Dynamic Calc resource for upper-level sparse members. It works miracles if you can do it right, without affecting your reporting performance. However, there may be dependencies in formulas (averages, etc.); the trick is to use Dynamic Calc in those formulas (make them dense) so they calculate at the end. Remember that the calc order for Dynamic Calc is the opposite of regular aggregations.
    Believe me, it can make a huge difference. I have used it in situations where, depending on the polarity of the sparse dimension, it cut the size down tenfold and consequently the calc times. Unfortunately no one teaches this great resource (Dynamic Calc) thoroughly, so do not be scared at the beginning. You can contact me at [email protected].
    Yannis
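    As a rough illustration of the change Yannis describes (the dimension names here are hypothetical, and Dynamic Calc itself is set on the upper-level members in the outline, not in the script), the batch calc then only needs to aggregate the dimensions that remain stored:

    /* Upper levels of the deep sparse dimensions (say, "Customer" and  */
    /* "Product" - hypothetical names) are tagged Dynamic Calc in the   */
    /* outline, so the batch script shrinks from a 13-dimension CALC    */
    /* DIM to just the dimensions that are still stored:                */
    CALC DIM ("Entity", "Period");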

  • SEM-BPS Real time issues

    Hi Gurus,
    I'm new to SEM-BPS. Can anybody send me some SEM-BPS real-time issues, and also some interview questions on SEM-BPS with guidance on how to answer them? Please forward them to my email id: [email protected]
    Thanks and regards,
    Suresh

    Have you tried posting to the Business Planning forum, which is focused on SEM-BPS? It's under the Business Intelligence category.

  • Please tell me some real-time issues faced by you in SCRIPTS

    Please tell me some real-time issues you have faced in your experience.

    First, understand that SAP Scripts are client-dependent: a change made in one client is not automatically reflected in all development clients.
    Alignment issues are the problems we see most often in SAP Script, mostly because of printer settings and the like.
    We may also have problems when printing Unicode characters. For printing double-byte characters (Japanese, Chinese, Korean, etc.) we should have a Unicode-enabled printer.
    Regards,
    SaiRam
