The signed data problem for script

Hi,
I encountered a problem with the sign of values when running script logic.
P_ACCT   Account Type
S001   INC
S002   INC
V001   EXP
Script: TEST.LGF
*WHEN  P_ACCT
   *IS  S001
       *REC(FACTOR=1,P_ACCT="S002")
       *REC(FACTOR=1,P_ACCT="V001")
*ENDWHEN
*COMMIT
I enter 100 LC on S001 and run the script in the following scenarios:
Scenario 1:
- include TEST.LGF in Default.LGF
- create an input schedule and enter 100 on S001
I get S002 = 100 and V001 = 100.
Scenario 2:
- enter S001 = 100 into the application first
- create a Data Manager package to run the script TEST.LGF
- run the package and verify that it finishes "successful"
I get S002 = -100 and V001 = -100.
Why does the same code produce different signs?
Is there anything I missed?
Please give me some advice.

Hi,
You haven't missed anything; the system behaves the way you describe.
Values are processed differently in Default Logic than when the script runs through a Data Manager package.
Default Logic works on the exact data that was keyed in, while a script executed through a DM package takes the back-end (stored) data as its input and processes that.
In Default Logic I even tried setting the scope with XDIM_MEMBERSET for the account, without keying values for the accounts. Even then it works as expected: if the source has +100, the destination also gets +100.
But through the DM package, for a source value of +100 the destination gets -100.
This is really strange, and I hadn't noticed it for a long time.
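A minimal, hypothetical sketch of the usual explanation (it assumes, as standard BPC behaviour, that accounts with ACCTYPE INC or LEQ are stored with the sign reversed in the back-end fact table, which is the data a DM package reads, while Default Logic sees the values exactly as they were keyed):

public class SignedDataDemo {

    // Assumption: INC (and LEQ) account types are stored with the sign
    // reversed in the back-end fact table; EXP/AST are stored as keyed.
    static double storedValue(String accountType, double keyedValue) {
        boolean flip = accountType.equals("INC") || accountType.equals("LEQ");
        return flip ? -keyedValue : keyedValue;
    }

    public static void main(String[] args) {
        double keyed = 100.0;                                // value entered on S001 (INC)

        // Default Logic: the script reads the keyed value itself.
        double defaultLogicSource = keyed;                   // +100, so S002 and V001 get +100

        // DM package: the script reads the stored signed data.
        double dmPackageSource = storedValue("INC", keyed);  // -100, so S002 and V001 get -100

        System.out.println("Default Logic source: " + defaultLogicSource);
        System.out.println("DM package source:    " + dmPackageSource);
    }
}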
Regards,
G.Vijaya Kumar

Similar Messages

  • The Application Data folder for Visual Studio could not be created

    Hi! My laptop was working fine, but someone recently installed Malwarebytes on it; it supposedly found 4 problems and "fixed" them, and afterwards Visual Studio Professional 2013 and Chrome stopped launching. When I realized what had happened I uninstalled
    Malwarebytes, but it was too late. Every time I tried launching Visual Studio I would get a message that said something like "Two or more components could not be found. Please reinstall". I ran a repair on Visual Studio, but that solved nothing and
    only changed the message; the new message was "The Application Data folder for Visual Studio could not be created", so I uninstalled the program and all the extras I found in Control Panel. I downloaded Visual Studio Community 2013 and am trying to
    install it, but I got the same exact message again. Why is this happening? What does it mean? Before I uninstalled Visual Studio Professional 2013 I looked for ways others solved it, but nothing seemed to work. Please help. Thanks!

    Hello Danny,
    Malwarebytes may have modified some files, registry values, or environment variables that Visual Studio uses, and the setup may not be able to recreate them for you.
    Please try the workaround in this blog: http://tutewall.com/application-data-folder-for-visual-studio-could-not-be-created/ and remember to back up your
    registry keys before you make any changes to the registry.
    Best regards,
    Barry

  • What are the master data tables for 'Plant'?

    Hi friends,
    What are the master data tables for 'Plant' that contain the data below?
    PLANT
    NAME1
    NAME2
    LANGUAGE
    HOUSE_NUM_STREET
    PO_BOX
    POSTAL_CODE
    CITY
    COUNTRY_KEY
    REGION
    COUNTRY_CODE
    CITY_CODE
    TIME_ZONE
    TAX_JURISDICTION
    FACTORY_CALENDAR
    Thanks a lot!

    Hi,
    Please try the following tables for your requirement:
    T001W
    ADRC
    J_1IMOCOMP
    T001W :  werks TYPE t001w-werks, "PLANT
            name1 TYPE t001w-name1, "PLANT DESCRIPTION
            adrnr TYPE t001w-adrnr, "PLANT ADDRESS NUMBER
    ADRC:  addrnumber TYPE adrc-addrnumber,  "ADDRESS NUMBER
             str_suppl1 TYPE adrc-str_suppl1,  "STREET2
             str_suppl2  TYPE adrc-str_suppl2, "STREET3
             street TYPE adrc-street,          "STREET
             city1 TYPE adrc-city1,            "CITY
             post_code1 TYPE adrc-post_code1,  "CITY POSTAL CODE
             post_code2 TYPE adrc-post_code2,  "PO Box postal code
             tel_number TYPE adrc-tel_number,  "TELEPHONE NUMBER
             fax_number TYPE adrc-fax_number,  "FAX NUMBER
             str_suppl3 TYPE adrc-str_suppl3,  "STREET4
             location TYPE adrc-location,      "STREET5
             city2 TYPE adrc-city2,            "DISTRICT
    For Tax Details :
    J_1IMOCOMP :   werks TYPE j_1imocomp-werks,
                              j_1icstno TYPE j_1imocomp-j_1icstno,
                              j_1ilstno TYPE j_1imocomp-j_1ilstno,
    Hope this will help.
    Regards,
    Archana

  • How to get the context data using java script in interactive forms

    Hi All,
    How do I get the context data using JavaScript in Interactive Forms by Adobe? I am using Web Dynpro Java.
    thanks.

    Hi venkat,
    Please refer to this link:
      Populating one Drop-Down list from the selection of another Drop-down list
    Thanks,
    Raju.

  • How to determine the creation date/time for a file?

    The major operating systems maintain both a creation date/time and a last-modified date/time for files, but the File class has only a lastModified() method. How does one determine the creation date/time for a file?

    As far as I know, there is no way to get the creation time through java.io.File, since it is OS-dependent information.
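    That was true of the old java.io.File API; on Java 7 and later, the java.nio.file API can report a creation time where the underlying file system records one. A minimal sketch (the file name is just an example):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.BasicFileAttributes;

    public class CreationTimeDemo {
        public static void main(String[] args) throws IOException {
            Path path = Paths.get("example.txt");   // hypothetical file name

            // Read the basic attributes in one call. On file systems that do not
            // record a creation time, creationTime() may fall back to the
            // last-modified time or the epoch.
            BasicFileAttributes attrs = Files.readAttributes(path, BasicFileAttributes.class);

            System.out.println("Created:       " + attrs.creationTime());
            System.out.println("Last modified: " + attrs.lastModifiedTime());
        }
    }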

  • Cannot read the next data row for the data set

    Hi,
    My report runs fine when I view it in VS, and the data shows fine when I run the query in the data window, but when I publish it to the server I get the above error. I am running SQL Server 2005 RTM and have re-deployed the entire solution.
    Any ideas?

    Hi All,
    Upon investigation I found that there was an issue converting a varchar value to the datetime datatype in the stored procedure, and that is why the report was throwing the error.
    I ran the same report in BIDS and got a clear error message:
    An error has occurred during report processing. (rsProcessingAborted)
    Cannot read the next data row for the dataset xxx. (rsErrorReadingNextDataRow)
    The conversion of a varchar datatype to a datetime datatype resulted in an out-of-range value.
    Once I corrected this, the report worked fine.
    Thanks Shiven:) If Answer is Helpful, Please Vote

  • Upon transferring to the cloud, all my previous mail shows the same date, except for today and yesterday. The rest says it came in on the date I had to replace the hard drive 3 years ago. Totally stumped. It didn't do this on my Mac...

    Upon transferring to the cloud, all my previous mail says it entered my box on the same date, except for today and yesterday. Everything else says it came in on the date on which I had to replace the hard drive 3 years ago. Totally stumped. It didn't do this on my Mac... only when I moved. I have bad feelings about being able to fix this... Any suggestions?

    Getting a new keyboard on eBay and replacing it yourself may be the cheaper way, though it requires more skill and good knowledge of the MacBook Air.
    http://www.ebay.com/sch/i.html?_from=R40&_trksid=p2050601.m570.l1313.TR0.TRC0.H0 .Xmacbook+air+2011+keyboard&_nkw=macbook+air+2011+keyboard&_sacat=0
    https://www.youtube.com/watch?v=gLbasVD69xo

  • Creating the text data source for 0OI_LIFNR(TD: Carrier (Number of vendor a

    Hi
    I have to create the text data source for this characteristic; basically it is the same as 0VENDOR here.
    I want to make it delta-enabled.
    I am creating it on the view BIW_LFA1T (the same as 0VENDOR).
    Can anyone advise what I can use to make it delta-enabled without any code?
    Can I use a numeric pointer, like 0VENDOR_TEXT does?
    Many Thanks and regards,
    Kate.

    When defining a data source through the admin web page, don't add '!' in front of the password;
    '!' is needed only when configuring a data source in the mapViewerConfig.xml file.

  • Finding the Generic data source for vendor master.

    Hi,
    I am trying to find the generic data source for the 0VENDOR InfoObject in R/3, but I am not able to find an exact match. Can anyone help me? I just started on BW!
    Thank you

    Hi Tippireddy,
    If you want to extract the vendor information through Business Content, use the following navigation:
    T-code RSA5 -> MM-IO (Master Data Materials Management in general) -> 0VENDOR_ATTR, 0VENDOR_LKLS_HIER, or 0VENDOR_TEXT; these three are the standard data sources for vendor.
    Regards,
    PRK

  • What is the best data type for wallet application?

    Hi Friends..
    I want to know what the best data type for a wallet application is.
    Assume that I want to store the total amount of money saved digitally in an applet wallet.
    Whenever there is a transaction, the total stored in the applet wallet would be added to or subtracted from, depending on how much money was spent or saved.
    Which of these is the better implementation?
    1. I save the user ID and the total amount of money on the Java Card; whenever there is a transaction, the total is added to or subtracted from directly and then saved on the card again.
    2. Or I save the user ID on the Java Card and the total amount of money in a database; whenever there is a transaction, the ID is read from the card, the database is queried by that ID, and the money is added or subtracted depending on how much was spent in the transaction.
    Please help me regarding this
    Thanks in advance

    Hi,
    Personally, I would choose to store the total amount on the card. You could use two shorts (a short[], perhaps) to store an integer (add more shorts to increase precision) and simply handle overflow yourself. You could even look into using a third-party library (or class) that treats a byte array as a big integer. There were some recent posts on floating-point arithmetic that could be helpful for you, since you will probably want to use decimals and Java Card does not natively support floats.
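    A minimal plain-Java sketch of the two-short idea described above (the class and method names are hypothetical, and the int promotions used for the carry and bounds checks would be replaced with card-safe comparisons and ISOException on a real Java Card):

    // A 32-bit balance kept as two unsigned 16-bit words, with explicit
    // overflow/underflow handling.
    public class WalletBalance {
        private final short[] balance = new short[2];   // [0] = high word, [1] = low word

        // Add an amount in the range 0..65535 to the balance.
        void credit(int amount) {
            int low = (balance[1] & 0xFFFF) + (amount & 0xFFFF);
            int high = (balance[0] & 0xFFFF) + (low >>> 16);
            if (high > 0xFFFF) {
                throw new IllegalStateException("balance overflow");   // ISOException on-card
            }
            balance[1] = (short) low;
            balance[0] = (short) high;
        }

        // Subtract an amount in the range 0..65535 from the balance.
        void debit(int amount) {
            long current = total();
            if ((amount & 0xFFFF) > current) {
                throw new IllegalStateException("insufficient funds"); // ISOException on-card
            }
            long next = current - (amount & 0xFFFF);
            balance[0] = (short) (next >>> 16);
            balance[1] = (short) next;
        }

        long total() {
            return ((balance[0] & 0xFFFFL) << 16) | (balance[1] & 0xFFFF);
        }

        public static void main(String[] args) {
            WalletBalance w = new WalletBalance();
            w.credit(50000);
            w.credit(30000);
            w.debit(12345);
            System.out.println("Balance: " + w.total());   // prints 67655
        }
    }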
    Cheers,
    Shane

  • Why won't my Pandora work when I'm not on Wi-Fi? I have the cellular data on for it, but it never works. Also, Pandora isn't the only app of mine that does that. What's wrong with it?

    Why won't my Pandora work when I'm not on Wi-Fi? I have the cellular data on for it, but it never works. Also, Pandora isn't the only app of mine that does that. What's wrong with it?

    You're welcome.
    Voicemail is left on your carrier's server. That will continue to work unless you report your iPhone as lost or stolen with your carrier.
    You may never find it again, and you can't if the iPhone remains offline or out of service, which means the iPhone is powered off or doesn't have cellular reception.

  • Is there a way to force Adobe Reader to grab the sign on ID for a digital stamp?

    I work for a state Agency and we want to use digital stamps as our signatures on our internal documents. I have created the form in LiveCycle and I know that in Adobe Reader by default the stamp will use the person's sign on ID for the stamp and then the person adds their name and other information.
    However, if the person right-clicks the stamp, edits the Identity, puts in another sign-on ID number (such as their supervisor's), and stamps the document, there is no way to tell that both of those stamps, with different sign-on IDs and names, were created by the same person.
    If there is a script I can enter in LiveCycle that, when the form is opened in Adobe Reader, would lock the Identity field of a stamp created in Adobe Reader and prevent any change to the sign-on ID, then that would solve our problem and maybe the problem for other state agencies wanting to follow suit.
    The digital signatures are even worse. I made one in my name, my supervisor's name and my dog's name, attached them all to a document, validated all the signatures and they look absolutely authentic. Why would Adobe make digital signatures like that? If we could just find some evidence within the data showing that all the signatures were applied by the same person or on the same computer, then we could use them. But the stamps at least grab that unique sign on number that we use and applies it to the stamp if the user doesn't alter it.
    I'm on a time crunch as we hoped to launch this after the first of the year but our attorneys are saying, "uh, uh" until something can be done to prevent fraud. We have over 3,000 people in our agency so EchoSign would be out of the question.
    I'd appreciate any suggestions.

    You will not get what you need using stamps, especially if you ever want to use dynamic XFA forms.
    Digital signatures that were signed with a self-signed digital ID won't fully validate unless the user chooses to trust the corresponding digital certificate first. You should only trust a digital certificate if you trust its source. The problem is that self-signed IDs will be difficult to manage at the scale you're talking about.
    Digital signatures are the best approach for your needs. They can provide both nonrepudiation of document origin as well as document integrity. You just have to figure out how to implement a solution you can afford.
    Your agency can become its own certificate authority and issue certificates to its employees. You'd just have to find a way to implement and manage such a system.

  • What are the sign-in settings for Jabber?

    Firstly, I do not have a business account with Cisco.
    We have downloaded the desktop Jabber program.
    I have now registered an account with Jabber and an account with Cisco.
    When I open Jabber Video on the desktop, it asks for a username.
    What is the Jabber username? The email sent to me from Jabber states: "Sign in with your email address or jabber email" - but this is not the username, because the program will not accept an @ sign in the username field.
    I have tried the username that I signed up to Cisco with - this is not it either.
    I see a notice stating "Connection rejected by server. Try logging in again"
    On the Jabber start up screen, I see "Sign-in settings". When I open this window, all fields (Internal Server, External Server, SIP Domain) are blank. Why are they blank? Could this be the problem? Are the data for these fields to come from my side or from Cisco?
    Unfortunately, Cisco tech support won't take my call because I do not have an account and therefore cannot set up a "Tech Support File".
    Appreciate all feedback.
    thanks

    Hi Stephen,
    To log in, the internal server should be: https://boot.ciscojabbervideo.com/endpoint/configuration
    The external server and domain fields are blank.
    Your username only needs the prefix of your @jabber.com video address.  For example, if your video address is [email protected] then your username is "stephen."
    Unfortunately, there is only forum support for this free service.
    Regards,
    Jason

  • Error while activating the Master Data InfoSource for attributes

    Hi,
    I'm trying to load the attributes for a master data characteristic.
    I created the InfoSource, and when I try to activate it,
    it gives the following error:
    Diagnosis
    An error occurred in the program generation:
    Program / Template:   RSTMPL80
    Error code / Action: 6
    Row:        505
    Message:    The data object "P_S_COMM" has no component called
    Procedure
    If the problem occurred during the data load or transport, activate the transfer rule.
    If the problem persists, search in the SAP note system under 'RSAR 245'
    So please tell me what the error is and how to resolve this issue.
    And please give me the 'RSAR 245' error message from the SAP Note system.
    I will assign points.
    Bye,
    rizwan

    Hi Rizwan,
    Check whether Note 602318 (RSAR 245 error analysis with RSTMPL80) helps:
    Summary
    Symptom
    The process of generating transfer rules or loading data terminates with error RSAR 245.
    The long text for error message RSAR 245 indicates the cause of the generation error.
    1. Error code: 6
    a) Message: The P_S_COMM data object has no component
    This is very likely to be due to an 'obsolete' communication structure.
    b) Message: <table name> is already declared.
    c) Other messages
    2. Error code: 7
    Other terms
    RSAR245, RSAR 245, activate transfer structure, monitor, request
    Solution
    1. Error code: 6
    a) 'Obsolete' communication structure:
                Due to an optimization algorithm, pressing the 'Generate' button does not have any effect on the communication structure. Therefore, use the following workaround:
    Select 'Communication structure' -> 'Change'.
    Add any InfoObject (for example, '0MATERIAL') to the communication structure -> 'ENTER'.
    Communication structure ---> 'Save'
    Select the InfoObject you inserted and delete it.
    Communication structure ---> 'Save'
    Communication structure -> 'Generate'.
    b) Message: <table name> is already declared. Check whether the local part of your routines contains TABLES statements. TABLES statements always have global validity and should therefore be in the global part of the routine.
    c) If the error occurs when data is being loaded, the transfer rules should be generated using RSA1.
    In many cases, generating transfer rules using RSA1 results in a detailed error message.
    2. Error code: 7
               Program to be generated is locked by parallel process.
    This error can sporadically occur if parallel processes simultaneously try to generate the same program.
    Unfortunately, you cannot completely prevent the error from occurring since these types of lock situations can still crop up during data loading in parallel processes. We are currently working on improving system stability relating to lock situations during the load task.
    Reloading usually works without any problems.

  • What is the best data structure for loading an enterprise Power BI site?

    Hi folks, I'd sure appreciate some help here!
    I'm a kinda old-fashioned gal and a bit of a traditionalist, building enterprise data warehouses out of Analysis Service hypercubes with a whole raft of MDX for analytics.  Those puppies would sit up and beg when you asked them to deliver up goodies
    to SSRS or PowerView.
    But Power BI is a whole new game for me.  
    Should I be exposing each dimension and fact table in the relational data warehouse as a single Odata feed?  
    Should I be running Data Management Gateway and exposing each table in my RDW individually?
    Should I be flattening my stars and snowflakes and creating a very wide First Normal Form dataset with everything relating to each fact? 
    I guess my real question, folks, is what's the optimum way of exposing data to the Power BI cloud?  
    And my subsidiary question is this: am I right in saying that all the data management, validation, cleansing, and regular ETL processes are still required before the data is suitable to expose to Power BI?
    Or, to put it another way, is it not the case that you need to have a clean and properly structured data warehouse before the data is ready to be massaged and presented by Power BI?
    I'd sure value your thoughts and opinions,
    Cheers, Donna
    Donna Kelly

    Dear All,
    My original question was: 
    what's the optimum way of exposing data to the Power BI cloud?
    Having spent the last month faffing about with Power BI – and reading about many people’s experiences using it – I think I can offer a few preliminary conclusions.
    Before I do that, though, let me summarise a few points:
    Melissa said “My initial thoughts:  I would expose each dim & fact as a separate OData feed” and went on to say “one of the hardest things . . . is the data modeling piece . . . I think we should try to expose the data in a way that'll help usability . . . which wouldn't be a wide, flat table”.
    Greg said “data modeling is not a good thing to expose end users to . . . what we've had better luck with is building out the data model, and teaching the users how to combine pre-built elements”.
    I had commented “. . . end users and data modelling don't mix . . . self-service so far has been mostly a bust”.
    Here at Redwing, we give out a short White Paper on Business Intelligence Reporting. It goes to clients and anyone else who wants one. The heart of the Paper is the Reporting Pyramid, which states: business intelligence is all about the creation and delivery of actionable intelligence to the right audience at the right time.
    For most of the audience, that means Corporate BI: pre-built reports delivered on a schedule.
    For most of the remaining audience, that means parameterised, drillable, and sliceable reporting available via the web, running the gamut from the dashboard to the details, available on demand.
    For the relatively few business analysts, that means the ability for business users to create their own semi-customised visual reports when required, to serve their audiences.
    For the very few high-power users, that means the ability to interrogate the data warehouse directly, extract the required data, and construct data mining models, spreadsheets, and other intricate analyses as needed.
    On the subject of self-service, the Redwing view says: although many vendors want to sell self-service reporting tools to the enterprise, the facts of the matter are these:
    • 80%+ of all enterprise reporting requirement is satisfied by corporate BI . . . if it’s done right.
    • Very few staff members have the time, skills, or inclination to learn and employ self-service business intelligence in the course of their activities.
    I cannot just expose raw data and tell everyone to get on with it.  That way lies madness!
    I think that clean and well-structured data is a prerequisite for delivering business intelligence. Assuming that data is properly integrated, historically accurate, and non-volatile as well, then I've just described a data warehouse, which is the physical expression of the dimensional model.
    Therefore, exposing the presentation layer of the data warehouse is – in my opinion – the appropriate interface for self-service business intelligence.
    Of course, we can choose to expose perspectives as well, which is functionally identical to building and exposing subject data marts.
    That way, all calculations, KPIs, definitions, and even field names are consistent, because they all come from the single source of truth and not from spreadmart hell.
    So my conclusion is that exposing the presentation layer of the properly modelled data warehouse is – in general - the way to expose data for self-service.
    That’s fine for the general case, but what about Power BI?  Well, it’s important to distinguish between new capabilities in Excel, and the ones in Office 365.
    I think that to all intents and purposes, we’re talking about exposing data through the Data Management Gateway and reading it via Power Query.
    The question boils down to what data structures should go down that pipe. 
    According to Create a Data Source and Enable OData Feed in Power BI Admin Center, the possibilities are tables and views. I guess I could have repeating data in there, so it could be a flattened structure of the kind Melissa doesn’t like (and neither do I).
    I could expose all the dims and all the facts . . . but that would mean essentially re-building the DW in the PowerPivot DM, and that would be just plain stoopid.  I mean, not a toy system, but a real one with scores of facts and maybe hundreds of dimensions?
    Fact is, I cannot for the life of me see what advantages DMG/PQ has over just telling corporate users to go directly to the Cube Perspective they want, which already has all the right calcs, KPIs, security, analytics, and field names . . . and most importantly, is already modelled correctly!
    If I’m a real Power User, then I can use PQ on my desktop to pull mashup data from the world, along with all my on-prem data through my exposed Cube presentation layer, and PowerPivot the heck out of that to produce all the reporting I’d ever want. It'd be a zillion times faster reading the data directly from the Cube instead of via the DMG, as well (I think Power BI performance sucks, actually).
    Of course, your enterprise might not have a DW, just a heterogeneous mass of dirty unstructured data. If that’s the case, choosing Power BI data structures is the least of your problems!  :-)
    Cheers, Donna
    Donna Kelly
