Re-post: Question regarding Pricing

Greetings,
A friendly reminder that we need an answer:
I want to make sure we understand the pricing and charging correctly.
The information I have is from http://forums.adobe.com/servlet/JiveServlet/download/2099-501587-2292267-22393/LCCS_Pricing_FAQ.pdf
And the reports from https://cocomo.acrobat.com/
Is this the current information?
Regarding user-minutes: are they counted for all the time the user is in the room, or only for actual active video?
Thanks,
Zeev

Check Tax Code in T.code: FTXP
This error is also possible because a Tax Code must be maintained for the Tax Condition Type, which may have been missed by oversight when maintaining the Condition Record, as it is not a mandatory field.
Also check the same with FI consultants.
Hope the above information is helpful.
Regards,
Rajesh Banka

Similar Messages

  • What is the best place to post questions regarding the Mac App Store?

    Hi
    I am creating a game that I want to publish in the Mac App Store once it is done, but I don't know the steps: registering as a developer at Apple.com, how to pay, sending the game for approval, and getting paid. (I am not from the USA, and there is no store for my country.)
    Where can I post my questions and get the right answers?

    Hi imacyh!
    This page, Distribute Mac apps on the Mac App Store, might be a good place to start.
    Also check the Developer Forum.
    ali b

  • Question regarding Mesh with 3702 and non-AC APs

    Hello! 
    Quick question regarding mesh deployments with two different sorts of APs, AC and non-AC models: if my 3702i is my root AP and a 3602i is my MAP, will AC still work at 80 MHz, or will I have to switch to 40 MHz (and thus cripple AC performance)?
    Not 100% sure on this... I *think* it should still work for the normal 802.11n connection, but I'm not sure whether the 80 MHz channel width needed for AC will cause the non-AC 3602i to be stranded?
    Thanks a lot for your insight!

    Currently, my network DHCP server is a software-based DHCP server. In reading over your post, if I understood correctly, it sounds like the managed switch would have its own hardware-based DHCP server to assign IP addresses to those clients identified on the "external" VLAN. Did I understand that correctly, or did I misread something?
    The DHCP server will be software based; even though you define it on your switch, it is a DHCP service running on its OS.
    I am configuring this setup for a small business application and will need to purchase a managed switch with 16 or 24 ports. Do you have any recommendations on a particular managed switch that will handle the VLAN configuration and include PoE while keeping costs in mind?
    In this forum, most of us discuss Cisco enterprise-grade wireless. Here is the 2960-X series switch detail, if you are interested:
    http://www.cisco.com/c/en/us/products/switches/catalyst-2960-x-series-switches/index.html
    You may need to check the pricing with your Cisco account manager or from a Cisco partner.
    HTH
    Rasika
    **** Pls rate all useful responses ****

  • Where do I find daily posted questions on SAP ABAP and SAP Web Dynpro ABAP

    Hi
    Where do we find daily posted questions on SAP ABAP and SAP Web Dynpro ABAP in SCN, so that I can go through the questions and answer them?

    Hi,
    Go to the Content tab of any space and click on Discussions. Then you can sort them by date created or any other criterion.
    For example, this is the link for WDA discussions: Web Dynpro ABAP
    You can also click on Receive email notifications for any space to get updates on that space.
    hope this helps,
    Regards,
    Kiran

  • One question about Pricing and Conditions has puzzled me for a long time!

    One question about Pricing and Conditions has puzzled me for a long time. I will take one example to explain my question:
    1 - First, my sales order uses pricing procedure RVAA01.
    2 - Next, the pricing procedure RVAA01 has some condition types, such as EK01 (Actual Costs), PR00 (Price), and so on.
    3 - Next, the condition type PR00 defines access sequence PR00 as its access sequence.
    4 - Next, the access sequence PR00 has some condition tables, such as:
         table 118 : "Empties" Prices (Material-Dependent)
         table 5 : Customer/Material
         table 6 : Price List Type/Currency/Material
         table 4 : Material
    5 - Next, I need to maintain the condition tables' records, such as for table 5 (Customer/Material). I guessed that SAP would supply one screen for me to input the data of table 5: it would ask me to select a table, such as table 5, and then go to a screen to let me enter that table's data. But when I use transaction VK31 or VK32 to maintain condition table records, I found it totally different from my guess:
    A - First, I cannot find a place to open a table, such as table 5, to input the data.
    B - Second, for example, when I select VK31 -> Discounts/Surcharges -> By Customer/Material, SAP shows a grid view on the right side. In each line of the grid view, you need to select the condition type in the first field, and this confuses me very much. Why does SAP ask me to select a condition type and not a condition table? By normal logic, it ought to ask for the condition table, not the condition type!
    Dear all, I'm new to SD. Maybe this is a very stupid question, but it has puzzled me for a long time. If anyone can explain this in detail and help me understand the concept, I will appreciate it very much. Thank you.

    Hi,
    You said that you are using the T.codes VK31 or VK32.
    These transaction codes are used to enter condition records for standard condition types. As you can see, the grid on the left side has all the standard condition types like prices, discounts, taxes, freights.
    Please check using T.code VK11 or VK12 (change mode).
    Here you can enter the required condition type in the initial screen (like PR00, MWST, K004, K005, etc.).
    After giving the condition type, press Enter or click on the Combinations icon at the top of the screen. Then you can see all the condition tables which you maintained for that condition type, like, as you said, table 118, table 5, table 6 and table 4.
    You can select any table and press Enter; then you go into a screen with all the field catalogues you maintained for that table. For example, if you selected the combination Customer/Material (table 5), then after you press Enter you can see the customer field on top, and the material fields.
    You can give all the required values and save the condition record.
    Hope this is clear.
    REWARD IF HELPFUL.
    Regards,
    praveen

  • Question regarding ASO application in Essbase 11 version

    Hi All,
    Thanks for the replies to my previous posts.
    I have a question regarding an ASO application for a telecom company built in Essbase version 11. Please provide your feedback on the design.
    The ASO application has the following dimensions:
    Dimension      Levels   Level 0 Members   Attribute Dimensions
    Dimension1     2        6.5 million       15
    Dimension2     1        3                 -
    Dimension3     1        4                 -
    Dimension4     1        6                 -
    Dimension5     1        6                 -
    Dimension6     1        5                 -
    Dimension7     1        3                 -
    Dimension8     5        1700              -
    Dimension9     2        800               -
    Dimension10    2        40000             -
    Dimension11    3        750               -
    Dimension12    2        34000             -
    Dimension13    1        15                -
    The number of Measures is 8.
    The outline size is around 2.12 GB.
    The data is mostly sparse. Does this design yield good performance? Should I change some of the attributes to UDAs to increase performance? I think attribute dimensions are more flexible than UDAs, but they affect retrieval performance.
    Thanks in advance.
    Kannan.

    In ASO, attribute dimensions are treated like regular dimensions; that is to say, they are materialized just like a regular dimension. Changing them to UDAs won't buy you the same performance as having them as dimensions does. The one nice thing about attributes in ASO cubes (and BSO cubes) is that you don't clutter up the screen with dimensions that are not used a lot. If your attribute dimensions are used often in your ASO cube, there would be no performance difference if you made them regular dimensions.

  • I have a question regarding emails being marked as unread when syncing with an exchange account.

    I have a question regarding emails being marked as unread when syncing with an exchange account.
    In the evening I can see I have 10 unread emails on my Exchange account. I leave them unread. The following day, at work, I view and read all my mails from today and yesterday, so that I have NO unread messages. When I then sync my iPhone, it marks only the mails from today as read but leaves all the mails from yesterday marked unread. The only solution so far is to go through the mails on my iPhone one at a time for them to be marked as read.
    My mail account is set to sync 1 month back.
    I have had this problem on all the iphones I have had.
    I currently have an iPhone 5 and my software is up to date.
    What am I doing wrong?

    Hey kabsl,
    Thanks for the post.
    What type of email account is attached to your BlackBerry Z10 (POP3, IMAP, or Exchange)? Also, have you tried removing and re-adding the email account to test?
    Is this email account set up on only one computer or on several computers?
    I look forward to your reply.
    Cheers.
    -ViciousFerret
    Come follow your BlackBerry Technical Team on Twitter! @BlackBerryHelp
    Be sure to click Like! for those who have helped you.
    Click  Accept as Solution for posts that have solved your issue(s)!

  • Thanks for the reply to my question regarding sound in iMovie and sending me the online links. However, the online links are impossible to play as they upload too slowly, with the result that the playing keeps stopping, waiting for the content to catch up

    Thanks for your reply to my question regarding sound in iMovie. However, I cannot follow the links you sent, as they load too slowly and therefore the instructional movie keeps stopping, waiting for the content to load. Surely there must be a solution for this?
    Thanks in advance
    lolly

    Please continue posting in your original thread.
    With the amount of traffic on these forums, it is impossible to chase around to find it.

  • Questions regarding *dump_dest parameters and fast_recovery_area

    Hello,
    I just installed a fresh new 11.2.0.2 Database on Solaris 10.
    Everything was straightforward on the parameter side. I tried a custom install as well as the general purpose template. When installing with DBCA, I set every parameter around the DB name in lowercase.
    With this, some questions popped into my mind regarding certain parameters after installation.
    First, the %dump_dest parameters contain the db name twice in their paths (ocpdb in my case):
    background_dump_dest       /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
    user_dump_dest             /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
    core_dump_dest             /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/cdump
    Is it normal to have .../rdbms/dbname/dbname/... as the path, with dbname/dbname? Why?
    Second, a question regarding the directory structure under fast_recovery_area (the new term for flash_recovery_area). The directory structure:
    oracle@enalab13:/u01/app/oracle/fast_recovery_area$ ls -l
    total 2
    drwxr-x--- 2 oracle oinstall 512 2010-10-28 19:53 ocpdb
    drwxr----- 5 oracle oinstall 512 2010-10-29 07:44 OCPDB
    oracle@enalab13:/u01/app/oracle/fast_recovery_area$ ls -l ocpdb
    total 9528
    -rw-r----- 1 oracle oinstall 9748480 2010-10-31 21:09 control02.ctl
    oracle@enalab13:/u01/app/oracle/fast_recovery_area$ ls -l OCPDB/
    total 3
    drwxr----- 5 oracle oinstall 512 2010-10-31 03:48 archivelog
    drwxr----- 3 oracle oinstall 512 2010-10-29 07:44 autobackup
    drwxr----- 3 oracle oinstall 512 2010-10-29 07:43 backupset
    Why am I getting a subdirectory with dbname in uppercase AND in lowercase? Should I specify dbname in uppercase at database creation to have all files under the same directory, or in lowercase? Or is this normal?
    I want to know how to do it well before reinstalling a fresh database.
    Thanks
    Bruno
    Edited by: blavoie on Oct 31, 2010 6:18 PM
    Edited by: blavoie on Oct 31, 2010 6:20 PM

    Hi,
    I just reinstalled everything from scratch, all in lowercase, in the environment variables as well as the dbname in DBCA:
    oracle@enalab13:~$ echo $ORACLE_SID
    ocpdb
    The fast recovery area directories; the dates prove that it's my fresh install:
    oracle@enalab13:/u01/app/oracle$ ll fast_recovery_area/
    total 2
    drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:06 ocpdb
    drwxr-x--- 4 oracle oinstall 512 2010-11-02 11:24 OCPDB
    oracle@enalab13:/u01/app/oracle$ ll -R fast_recovery_area/
    fast_recovery_area/:
    total 2
    drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:06 ocpdb
    drwxr-x--- 4 oracle oinstall 512 2010-11-02 11:24 OCPDB
    fast_recovery_area/ocpdb:
    total 9528
    -rw-r----- 1 oracle oinstall 9748480 2010-11-02 11:34 control02.ctl
    fast_recovery_area/OCPDB:
    total 2
    drwxr-x--- 3 oracle oinstall 512 2010-11-02 11:24 archivelog
    drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:06 onlinelog
    fast_recovery_area/OCPDB/archivelog:
    total 1
    drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:24 2010_11_02
    fast_recovery_area/OCPDB/archivelog/2010_11_02:
    total 47032
    -rw-r----- 1 oracle oinstall 48123392 2010-11-02 11:24 o1_mf_1_5_6f0c9pnh_.arc
    fast_recovery_area/OCPDB/onlinelog:
    total 0
    Some interesting output asked for earlier in the post:
    SQL> archive log list
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     4
    Next log sequence to archive   6
    Current log sequence           6
    SQL> show parameter recovery
    NAME                                 TYPE        VALUE
    db_recovery_file_dest                string      /u01/app/oracle/fast_recovery_area
    db_recovery_file_dest_size           big integer 4032M
    recovery_parallelism                 integer     0
    SQL> show parameter control_files
    NAME                                 TYPE        VALUE
    control_files                        string      /u01/app/oracle/oradata/ocpdb/control01.ctl,
                                                         /u01/app/oracle/fast_recovery_area/ocpdb/control02.ctl
    SQL> show parameter instance_name
    NAME                                 TYPE        VALUE
    instance_name                        string      ocpdb
    SQL> show parameter db_name
    NAME                                 TYPE        VALUE
    db_name                              string      ocpdb
    SQL> show parameter log_archive_dest_1
    NAME                                 TYPE        VALUE
    log_archive_dest_1                   string
    log_archive_dest_10                  string
    log_archive_dest_11                  string
    log_archive_dest_12                  string
    log_archive_dest_13                  string
    log_archive_dest_14                  string
    log_archive_dest_15                  string
    log_archive_dest_16                  string
    log_archive_dest_17                  string
    log_archive_dest_18                  string
    log_archive_dest_19                  string
    SQL> show parameter %dump_dest 
    NAME                                 TYPE        VALUE
    background_dump_dest                 string      /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
    core_dump_dest                       string      /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/cdump
    user_dump_dest                       string      /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
    I think next time I'll install everything regarding the Oracle SID in uppercase...
    Maybe these are details that I don't need to care about... It seems that something odd is happening with the management of the fast_recovery_area...
    Thanks
    Bruno
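    A likely explanation for both observations, with a sketch to verify it: the ADR trace path is built as diag/rdbms/<db_unique_name>/<instance_name>, and since both names are ocpdb here, the name shows up twice. Under the fast recovery area, Oracle-managed files (archivelog, onlinelog, autobackup) are created under a directory named after DB_UNIQUE_NAME, which is uppercased, while an explicitly configured path such as the control_files entry keeps the case it was typed in; hence the lowercase ocpdb directory holding only control02.ctl. A minimal JDBC sketch (connection details hypothetical; any SQL client works just as well) to read the names involved:

    import java.sql.*;

    public class CheckNames {
        public static void main(String[] args) throws SQLException {
            // Hypothetical host/service/credentials for the instance above
            try (Connection c = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//enalab13:1521/ocpdb", "system", "password");
                 Statement s = c.createStatement();
                 ResultSet r = s.executeQuery(
                         "SELECT name, value FROM v$parameter WHERE name IN "
                         + "('db_name','db_unique_name','instance_name')")) {
                while (r.next()) {
                    System.out.println(r.getString("name") + " = " + r.getString("value"));
                }
            }
        }
    }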

  • My question regards Camera Raw

    I am posting this to both the Photoshop and Lightroom forums. I use Photoshop CS4 and Lightroom 1.1 on Windows XP.
    My question regards Camera Raw.
    Question 1: If I open a RAW file in Photoshop, will the properties remain if I later open it in Lightroom, and vice versa? Previously I had used Canon’s DPP, and once I opened files in Photoshop it was back to square one. I concluded this was due to different manufacturers, and I can adjust to that. Now that I am staying with just Adobe products, I want to make sure I don’t have to do everything twice.
    Question 2: Using Bridge, I have opened some JPEGs in Camera Raw and manipulated them. It seemed to work better if I clicked “Done” rather than Save Image (if I need to save to another directory).
    Question 3: If I am working in Photoshop and would like either to open a RAW file or to manipulate a JPEG in Camera Raw, is it possible to open Camera Raw without having to use Bridge?
    Thanks in advance for any assistance.

    rollsnut wrote:
    I am posting this to both the Photoshop and Lightroom forums.
    You would have been better off asking on the Camera Raw forum, not Photoshop (since you posted it only to the Windows side).
    As for your questions, you can indeed coordinate settings to and from Camera Raw and Lightroom, but it's a matter of understanding how metadata editing works... In Camera Raw/Bridge, settings are saved in the file or in a sidecar file... In Lightroom, settings are saved in the Lightroom catalog database, not the file or sidecar, UNLESS you specifically instruct Lightroom to read or write the settings to or from the file.
    So, what Camera Raw does to a file will need to be read FROM the file inside Lightroom. Just make no mistake that once the image is actually processed and opened inside Photoshop, it's no longer a raw file but a processed file, and anything you do to it afterwards in Photoshop will not be in the original raw file.
    The other questions aren't really Lightroom questions and would be better off posted in the Camera Raw forum...

  • BW question regarding versioning

    Hi all,
    I posted a question regarding the versioning of the cube on Friday, 9th May, and I still have not received any reply. Please let me know; otherwise, please tell me that you are unable to reply to my question.
    My question was:
    In the versioning of the cube, we give the version a particular name and select its value type: 110, 130 or 140. What is this value type? What do 110, 130 and 140 really mean?
    Why do we need this value type? And can we get some documents to read and explore it? Please help.
    Thanks & regards ,
    Madhavi S Bichakal

    Hi Madhavi,
    Basically in BW you'll find two characteristics used for versioning:
    - Version: Used to create different versions of the information
    - Value type: used to indicate what the information means.
    Examples:
    Version 000 is usually Plan/Actual data (the final version). Then, for version 000, you will have different value types, like 010 = Actual, 020 = Plan, 030 = Target, etc.
    Then you can have different versions (001, 002, 003) that are used in the planning process. You start with version 001, then you can move to 002, 003, ... and when you have the final plan, you move to 000.
    That's the usual usage of version / value type.
    But you can use them as you want. The only problem you can have is if you rename the description of a value type and then activate Business Content (BCT) that generates data for that value type; the description will then be incorrect.
    From what you said, you are using values from 100 and above; SAP uses up to 90 from what I've seen, so you won't have any problems.
    Hope this clarifies.
    Regards,
    Diego

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

    Hi guys,
    I do have some questions regarding the DocumentDB Public Preview capacity and performance quotas:
    My use case is the following:
    I need to store about 200.000.000 documents per day, with a maximum of about 5000 inserts per second. Each document has a size of about 200 bytes.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/), I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure. This would result in the need for at least 5 CUs just to handle the inserts.
    Since one CU consists of 2000 RUs, I would expect the RU usage to be about 4 RUs per single document insert, or 100 RUs for a single SP execution with 50 documents.
    When I look at the actual RU consumption, I get values I don’t really understand:
    Batch insert of 50 documents: about 770 RUs
    Single insert: about 17 RUs
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to “Session”.
    I am using the SP from the example C# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
    Is there any flaw in my assumption (ok…obviously) regarding the throughput calculation, or could you give me some advice on how to achieve the throughput stated in the documentation?
    With the current performance I would need to buy at least 40 CUs, which wouldn’t be an option at all.
    I have another question regarding document retention:
    Since I would need to store a lot of data per day, I would also need to delete as much data per day as I insert:
    The data is valid for at least 7 days (it actually should be 30 days, depending on my options with DocumentDB).
    I guess there is nothing like a retention policy for documents (this document is valid for X days and will automatically be deleted after that period)?
    Since I guess deleting data on a single-document basis is no option at all, I would like to create a document collection per day and delete each collection after the specified retention period (see the naming sketch after the collection list below).
    Those historic collections would never change but would only receive queries. The only problem I see with creating collections per day is the missing throughput:
    As I understand it, the throughput is split equally according to the number of available collections, which would result in “missing” throughput on the actual hot collection (hot meaning the only collection into which I would actually insert documents).
    Is there any (better) way to handle this use case than to buy enough CUs so that the actual hot collection gets the needed throughput?
    Example: 
    1 CU -> 2000 RUs
    7 collections -> 2000 / 7 = 286 RUs per collection (per CU)
    Needed throughput for hot collection (values from documentation): 20.000
    => 70 CUs (20.000 / 286)
    vs. 10 CUs when using one collection and batch inserts or 20 CUs when using one collection and single inserts.
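    To make the arithmetic above concrete, a minimal sketch (all figures copied from the example; capacities as stated in the question):

    public class RuBudget {
        public static void main(String[] args) {
            double rusPerCu = 2000.0;                 // 1 CU -> 2000 RUs
            int collections = 7;                      // one hot + six historic
            double rusPerCollection = rusPerCu / collections;   // ~286 RUs per CU
            double neededRus = 20000.0;               // hot collection requirement
            // CUs needed so the hot collection's share alone covers 20,000 RUs
            long cusNeeded = (long) Math.ceil(neededRus / rusPerCollection);
            System.out.println(cusNeeded + " CUs");   // prints: 70 CUs
        }
    }

    With a single collection, the full 2000 RUs per CU apply to it, which gives the 10 CU figure (20,000 / 2,000).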
    I know that DocumentDB is currently in preview and that it is not possible to handle this use case as is, because of the current limit of 10 GB per collection. I am just trying to do a POC so I can switch to DocumentDB when it is publicly available.
    Could you give me any advice on whether this kind of use case can, or should, be handled with DocumentDB? I currently use Table Storage for this case (currently with a maximum of about 2500 inserts per second) but would like to switch to DocumentDB, since with Table Storage I had to optimize for writes per second and I get horrible query execution times because of full table scans.
    Once again my desired setup:
    200.000.000 inserts per day / Maximum of 5000 writes per second
    Collection 1.2 -> Hot Collection: All writes (max 5000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete whole collection
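    A tiny sketch of the day-based rotation listed above (the naming scheme is made up for illustration):

    import java.time.LocalDate;

    public class CollectionRotation {
        // Hypothetical day-based collection names for the setup above.
        static String collectionFor(LocalDate day) {
            return "docs_" + day;                            // e.g. docs_2014-10-22
        }

        public static void main(String[] args) {
            LocalDate today = LocalDate.now();
            String hot = collectionFor(today);                   // all writes land here
            String expired = collectionFor(today.minusDays(7));  // drop this collection
            System.out.println("hot: " + hot + ", expired: " + expired);
        }
    }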
    As a matter of fact, the perfect setup would be to have only one (huge) collection with automatic document retention…but I guess this won’t be an option at all?
    I hope you understand my problem; please give me some advice on whether this is at all possible, or will be possible in the future, with DocumentDB.
    Best regards and thanks for your help

    Hi Aravind,
    First of all, thanks for your reply regarding my questions.
    I sent you a mail a few days ago, but since I did not receive a response I am not sure it got through.
    My main question, regarding the actual usage of RUs when inserting documents, is still my main concern, since I cannot insert nearly as many documents per second and CU as expected.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/), I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
    As described in my post, the actual usage is multiple (actually 6-7) times higher than expected…even when running the C# examples provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
    I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to bring RU consumption down to a point where 500 inserts per second were even nearly possible.
    Here again my findings regarding RU consumption for batch inserts:
    Automatic indexing on: 777 RUs for 50 documents
    Automatic indexing off & mandatory path only: 655 RUs for 50 documents
    Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs for 50 documents
    Expected result: approximately 100 RUs (2000 RUs => 20x batch insert of 50 => 100 RUs per batch)
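    Put differently, a small sketch of the document rate that one CU's 2000 RUs actually sustains at the costs measured in this thread:

    public class MeasuredThroughput {
        public static void main(String[] args) {
            double rusPerCu = 2000.0;
            double singleInsertRus = 17.0;          // measured earlier: 17 RUs per single insert
            double batchRusPerDoc = 777.0 / 50.0;   // this post: 777 RUs per 50-doc batch
            // ~118 docs/s single and ~129 docs/s batched, versus the stated 500/1000
            System.out.printf("single: ~%.0f docs/s%n", rusPerCu / singleInsertRus);
            System.out.printf("batch:  ~%.0f docs/s%n", rusPerCu / batchRusPerDoc);
        }
    }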
    Since DocumentDB is still in preview, I understand that it is not yet capable of handling my use case regarding throughput, collection size, number of collections and possible CUs, and I am fine with that.
    If I am able to (at least nearly) reach the stated performance of 500 inserts per second per CU, I am totally fine for now. If not, I have to move on and look for other options…which would also be “fine”. ;-)
    Is there actually any working example code that manages to do 500 single inserts per second with one CU's 2000 RUs, or is this a totally theoretical value? Or is it just because of being in preview, and the stated values are planned to work?
    Regarding your feedback:
    "...another thing to consider is if you can amortize the request rate over the average of 200 M requests/day = 2000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching "RequestRateTooLargeExceptions" and retrying after the server specified retry interval…"
    Sadly, this is not possible for me, because I have to query the data in near real time for my use case…so queuing is not an option.
    "We don't support a way to distribute throughput differently across hot and cold collections. We are evaluating a few solutions to enable this scenario, so please do propose it as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to it."
    I guess I could circumvent this by clustering not into “hot” and “cold” collections but into “hot” and “cold” databases with one or multiple collections each (if 10 GB remains the limit per collection), if there were a way to (automatically?) scale the CUs via an API. Otherwise I would have to manually scale down the databases holding historic data. I also added a feature request, as proposed by you.
    Sorry for the long post, but I am planning the future architecture for one of our core systems and want to be sure I am on the right track.
    So if you are able to answer just one question, it would be:
    How do I achieve the stated throughput of 500 single inserts per second with one CU's 2000 RUs in reality? ;-)
    Best regards and thanks again

  • FI-GL: Question regarding "alternative account no." - Why in BSEG?

    Hi all,
    I have another question. I think this one is really a little bit tricky (I spent a lot of time investigating it but couldn't find an answer).
    It's regarding the field "alternative account no." in FS00 (table field SKB1-ALTKT) and the design of the SAP system regarding this feature (alternative chart of accounts).
    We have one company code (Belgium) in the system which uses alternative account numbers for a country-specific local chart of accounts. The country-specific chart of accounts BE01 is assigned to this company code in OBY6 besides the operative chart of accounts. The company code has been in production for some years, so there are many postings up to now. So far so good. Now an error has been found in the assignment of an alternative account to an operative account. As a result, they want us to evaluate the option of changing the alternative account number for this account in transaction FS00.
    For sure, it's not possible to change the alternative account no. in FS00 as long as there is a balance on the account. But if you post this balance to a temporary/technical account, it is possible to change the alternative account no. If you do this, SAP gives you message FH 165, which is a warning and not an error message (so you can save the changes). After that, it's possible to create an inverse posting to get the balance back onto the account.
    Now to the strange part (for me): why does SAP record this alternative account no. for each document line item in the BSEG table, in the field BSEG-LOKKT? This is also what message FH 165 is about. To me, this does not really make sense, but I'm sure I'm missing a detail somewhere.
    I mean, you know, for example, that alternative account A belongs to operative account B (via FS00 / SKB1-ALTKT). So why do you need to write this account to every single line item in BSEG? Why doesn't SAP just substitute the operative account no. with the alternative account no. in all relevant reports (RFBILA00, balance display S_ALR_87012277, ...)?
    The background of my question is this: if I zero out the balance and change the alternative account number in FS00, the postings made up to now won't be changed automatically. So for all postings up to now, the old alternative account no. remains in the BSEG table, while all new postings will carry the new alternative account no. From my understanding, there will therefore be an inconsistency in the database if I change the alternative account no.
    In order to evaluate whether I can change the alternative account no. without risking inconsistencies, I need to know how this field (BSEG-LOKKT) is used in the SAP system. Is it used in any special reports, and for what purpose is it in the BSEG table? What about the balance table GLT0? Is there also a special balance table for the alternative account no. in the system, or how are the balances (e.g. for RFBILA00) calculated for the alternative chart of accounts?
    I would be very glad for any help, as I am really at the end of my SAP knowledge on this point.
    Thank you in advance and sorry for the long (and maybe confusing?) posting.
    Regards,
    Peter

    Hi Peter,
    I believe the system is perfectly designed in this case.
    Let's say you have G/L account A in the operative CoA, which is linked to account 1 in the alternative CoA. Then the local law changes and you have to link account A to account 2 from 01.01.2008. The system works perfectly: all the items which were posted earlier are still shown on alternative account 1 (according to local law for the previous year), while the new items will be shown on account 2 (according to local law for the new year).
    BSEG-LOKKT is only used for reporting; it does not control anything. So there won't be any inconsistency in your system if you change the alternative account number according to business needs.
    hope this helps
    ec

  • Question regarding PSE 8

    When I have multiple pictures up, how do I keep two pictures from merging when I move one of the pictures to the side?  I would like to turn this feature off.
    Thanks.

    I opened 4 photos in Adobe Photoshop Elements 8 and have set the pictures to "float in all windows" so I can easily grab any one of them and move it to compare pictures. What happens is that the picture I am moving fades, and suddenly I have two pictures, one on top of the other, in one frame. In the bar at the top of the "merged" picture, it shows the file names of two pictures. From that top bar, I can undo the "merge" by grabbing one of the pictures and dragging it to the left or right, and I am back to the two original pictures. I would like to be able to move pictures around without two frames becoming one frame with two pictures layered. Hope that is a better explanation. Thanks, Diane
    Re: Question regarding PSE 8 - created by hatstead in Photoshop Elements:
    I don't understand. Multiple pictures up - what does that mean? 2 pictures merging when you move one to the side - how do you make them merge and move to the side? Feature off - when does this feature come on?

  • Questions regarding creating the database

    Hi there,
    From the previous posting, http://forum.java.sun.com/thread.jspa?threadID=640415&tstart=15, someone gave me the "formula" for connecting to the database:
    java.sql.Connection conn = java.sql.DriverManager.getConnection("jdbc:mysql://localhost/name_of_DB", "user", "password");
    Now just a couple of questions regarding the formula:
    1) Obviously, if I want the name of my DB, then I will have to create my DB. Can somebody please tell me the protocol for creating the DB? And where do I create this DB (i.e. can I create it anywhere in my application)? Or do I have to create a new database using MySQL itself?
    2) After creating a database, I would like to create multiple tables containing different data. Is it possible to place the code creating these tables anywhere in the application I want?
    Your ideas or advice would be much appreciated. Thank you in advance.
    Regards,
    Young

    1) Yes, you'll have to create the database using MySQL.
    2) You sure can, once you have the database created with the proper rights assigned to your user. You can put the code anywhere you want, but you may want to put it somewhere where it only runs once, like on install, if you're doing a standalone app.
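    Building on the getConnection line quoted above, a minimal sketch of both steps (database and table names are placeholders; the MySQL user needs CREATE rights):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SetupDb {
        public static void main(String[] args) throws Exception {
            // Connect to the server without selecting a database first
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/", "user", "password");
                 Statement st = conn.createStatement()) {
                st.executeUpdate("CREATE DATABASE IF NOT EXISTS name_of_DB");
                st.executeUpdate("CREATE TABLE IF NOT EXISTS name_of_DB.example ("
                        + "id INT PRIMARY KEY AUTO_INCREMENT, name VARCHAR(100))");
            }
        }
    }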
