Question regarding Collections Library Usage/Behaviour

Hi All,
I am trying to write a dynamic JSON formatter based on my business logic. I am trying something like this:
Code:
JSONArray testArray= new JSONArray();
JSONObject obj1 = new JSONObject();
obj1.put("TestKey","Test Value");
testArray.add(obj1);
This works perfectly, with the JSONObject showing up in the JSONArray.
However, when I chain JSONObject.put() directly inside the JSONArray.add() call (whether on an existing object or on an anonymous new JSONObject()), the size of the array is updated but the entry is not, i.e. I can't see any key/value pair inside the array.
Code:
import org.json.simple.JSONObject;
import org.json.simple.JSONValue;
import org.json.simple.JSONArray;
JSONArray testArray= new JSONArray();
JSONObject obj1 = new JSONObject();
testArray.add(obj1.put("TestKey","TestValue"));
[OR]
testArray.add(new JSONObject().put("TestKey","TestValue"));
Following up on the same discussion, when I try the following snippet:
public static void main(String[] args) {
    Set<HashMap<String,String>> testObj = new HashSet<HashMap<String,String>>();
    testObj.add((HashMap)(new HashMap().put("test","value")));
}
I still can't see any HashMap object created in testObj. I want to know why it is failing this way.
Thank you all in advance.
Note: Please excuse me if this post is not allowed here. I am posting it here because the JSON formatting API, i.e. the JSONArray and JSONObject classes, are straightforward extensions of ArrayList and HashMap.
Harsh
Edited by: codeNombre on Aug 26, 2010 6:23 PM

test.add((HashMap)(new HashMap().put("test","value")));
You're making an incorrect assumption about what HashMap.put() returns. Check it.
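To spell out the hint for anyone who lands here later: HashMap.put() (and therefore JSONObject.put(), since json-simple's JSONObject extends HashMap) returns the value previously mapped to the key, which is null for a fresh map, not the map itself. So the chained calls above add null to the collection, which is why the size grows but no key/value pair is visible. A minimal sketch of the fix (variable names are just illustrative):
Code:
import org.json.simple.JSONArray;
import org.json.simple.JSONObject;

JSONArray testArray = new JSONArray();
JSONObject obj1 = new JSONObject();

// put() returns the previous value for the key (null here), NOT the JSONObject,
// so it must not be chained inside add().
obj1.put("TestKey", "TestValue");   // call put() first...
testArray.add(obj1);                // ...then add the object itself
The same applies to the HashSet example: new HashMap().put("test","value") evaluates to null, so the set ends up containing a null element, which is why its size still updates while no HashMap appears in it.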

Similar Messages

  • Large storage question regarding Itunes Library Management

    HI
    I am running iTunes 7.4 and have a 500gb drive physically attached via FireWire 800. I currently have my iTunes preferences set to copy files to the library when added. My question is: can I have iTunes physically manage and store my music on the 500gb drive while keeping my video on another drive, with iTunes just maintaining a reference? In other words, I want multiple physical drives either managed or referenced by the iTunes library.
    How are you all dealing with this as your video and music libraries continue to get larger and larger?
    thanks
    jim

    Create a second iTunes Library. Quit itunes - hold down the option key when relaunching iTunes and follow the directions offered.
    MJ

  • Regarding COLLECT stmt usage in an ABAP Program.

    Hi All,
    Could anyone please explain whether the COLLECT statement really hampers the performance of the program to a large extent.
    If so, please explain how the performance can be improved without using it.
    Thanks & Regards,
    Goutham.

    COLLECT allows you to create unique or summarized datasets. The system first tries to find a table entry corresponding to the table key. (See also Defining Keys for Internal Tables). The key values are taken either from the header line of the internal table itab, or from the explicitly-specified work area wa. The line type of itab must be flat - that is, it cannot itself contain any internal tables. All the components that do not belong to the key must be numeric types ( ABAP Numeric Types).
    Notes
    COLLECT allows you to create a unique or summarized dataset, and you should only use it when this is necessary. If neither of these characteristics are required, or where the nature of the table in the application means that it is impossible for duplicate entries to occur, you should use INSERT [wa INTO] TABLE itab instead of COLLECT. If you do need the table to be unique or summarized, COLLECT is the most efficient way to achieve it.
    If you use COLLECT with a work area, the work area must be compatible with the line type of the internal table.
    If you edit a standard table using COLLECT, you should only use the COLLECT or MODIFY ... TRANSPORTING f1 f2 ... statements (where none of f1, f2, ... may be in the key). Only then can you be sure that:
    - The internal table actually is unique or summarized.
    - COLLECT runs efficiently. The check whether the dataset already contains an entry with the same key has a constant search time (hash procedure).
    If you use any other table modification statements, the check for entries in the dataset with the same key can only run using a linear search (and will accordingly take longer). You can use the function module ABL_TABLE_HASH_STATE to test whether the COLLECT has a constant or linear search time for a given standard table.
    Example
    Summarized sales figures by company:
    TYPES: BEGIN OF COMPANY,
            NAME(20) TYPE C,
            SALES    TYPE I,
          END OF COMPANY.
    DATA: COMP    TYPE COMPANY,
          COMPTAB TYPE HASHED TABLE OF COMPANY
                                    WITH UNIQUE KEY NAME.
    COMP-NAME = 'Duck'.  COMP-SALES = 10. COLLECT COMP INTO COMPTAB.
    COMP-NAME = 'Tiger'. COMP-SALES = 20. COLLECT COMP INTO COMPTAB.
    COMP-NAME = 'Duck'.  COMP-SALES = 30. COLLECT COMP INTO COMPTAB.
    Table COMPTAB now has the following contents:
              NAME    | SALES
              Duck    |   40
              Tiger   |   20

  • Question regarding Collections.binarySearch()

    I was looking at the source of the binarySearch implementation, and while most of it is perfectly clear, I just don't get why the method
    returns -(low + 1) instead of -low when the desired item has not been found.
    The doc says it should return -(insertion point) - 1 in this case, but this makes no sense to me.
    Thanks in advance :-).

    Well done figuring it out. It gives a good sense of satisfaction to lead someone to the answer instead of just spoon-feeding like most posters seem to expect!
    BTW: It does also explain that in the Javadoc ;-)
    (And don't forget the dukes...)
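    For later readers, the short answer: returning -(insertionPoint) - 1 keeps the "not found" result strictly negative even when the insertion point is 0, so it can never collide with a legitimate found index. A minimal sketch (illustrative values, not from this thread):
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    List<Integer> list = Arrays.asList(10, 20, 30);

    int found   = Collections.binarySearch(list, 10);  // returns 0: found at index 0
    int missing = Collections.binarySearch(list, 5);   // returns -1, i.e. -(0) - 1: not found, insert at index 0

    // If binarySearch simply returned -insertionPoint, both calls above would return 0
    // and the caller could not tell "found at 0" from "not found, insert at 0".
    if (missing < 0) {
        int insertionPoint = -missing - 1;              // recovers the insertion point, 0
    }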

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

    Hi guys,
    I do have some questions regarding the DocumentDB Public Preview capacity and performance quotas:
    My use case is the following:
    I need to store about 200.000.000 documents per day with a maximum of about 5000 inserts per second. Each document has a size of about 200 bytes.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure. This would result in the need of at least 5 CUs just to handle the inserts.
    Since one CU consists of 2000 RUs i would expect the RU usage to be about 4 RUs per single document insert or 100 RUs for a single SP execution with 50 documents.
    When i look at the actual RU consumption i get values i don’t really understand:
    Batch insert of 50 documents: about 770 RUs
    Single insert: about 17 RUs
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to “Session”.
    I am using the SP from the example c# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
    Is there any flaw in my assumption (ok…obviously) regarding the throughput calculation or could you give me some advice how to achieve the throughput stated in the documentation?
    With the current performance i would need to buy at least 40 CUs which wouldn’t be an option at all.
    I have another question regarding document retention:
    Since i would need to store a lot of data per day i also would need to delete as much data per day as i insert:
    The data is valid for at least 7 days (it actually should be 30 days, depending on my options with documentdb). 
    I guess there is nothing like a retention policy for documents (this document is valid for X day and will automatically be deleted after that period)?
    Since i guess deleting data on a single document basis is no option at all i would like to create a document collection per day and delete the collection after a specified retention period.
    Those historic collections would never change but would only receive queries. The only problem i see with creating collections per day is the missing throughput:
    As i understand the throughput is split equally according to the number of available collections which would result in “missing” throughput on the actual hot collection (hot meaning, the only collection i would actually insert documents).
    Is there any (better) way to handle this use case than buy enough CUs so that the actual hot collection would get the needed throughput?
    Example: 
    1 CU -> 2000 RUs
    7 collections -> 2000 / 7 = 286 RUs per collection (per CU)
    Needed throughput for hot collection (values from documentation): 20.000
    => 70 CUs (20.000 / 286)
    vs. 10 CUs when using one collection and batch inserts or 20 CUs when using one collection and single inserts.
    I know that DocumentDB is currently in preview and that it is not possible to handle this use case as is because of the limit of 10 GB per collection at the moment. I am just trying to do a POC to switch to DocumentDB when it is publicly available. 
    Could you give me any advice if this kind of use case can be handled or should be handled with documentdb? I currently use Table Storage for this case (currently with a maximum of about 2500 inserts per second) but would like to switch to documentdb since i
    had to optimize for writes per second with table storage and do have horrible query execution times with table storage because of full table scans.
    Once again my desired setup:
    200.000.000 inserts per day / Maximum of 5000 writes per second
    Collection 1.2 -> Hot Collection: All writes (max 5000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete whole collection
    As a matter of fact the perfect setup would be to have only one (huge) collection with an automatic document retention…but i guess this won’t be an option at all?
    I hope you understand my problem and give me some advice if this is at all possible or will be possible in the future with documentdb.
    Best regards and thanks for your help

    Hi Aravind,
    first of all thanks for your reply regarding my questions.
    I sent you a mail a few days ago but since i did not receive a response i am not sure it got through.
    My main question regarding the actual usage of RUs when inserting documents is still my main concern since i can not insert nearly
    as many documents as expected per second and CU.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
    As described in my post the actual usage is multiple (actually 6-7) times higher than expected…even when running the C# examples
    provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
    I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to reduce RU consumption to a point where 500 inserts per second were anywhere near possible.
    Here again my findings regarding RU consumption for batch inserts:
    Automatic indexing on: 777 RUs for 50 documents
    Automatic indexing off & mandatory path only: 655 RUs for 50 documents
    Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs for 50 documents
    Expected result: approximately 100 RUs (2000 RUs => 20x batch insert of 50 => 100 RUs per batch)
    Since DocumentDB is still in Preview, I understand that it is not yet capable of handling my use case regarding throughput, collection size, number of collections and possible CUs, and I am fine with that.
    If i am able to (at least nearly) reach the stated performance of 500 inserts per second per CU i am totally fine for now. If not
    i have to move on and look for other options…which would also be “fine”. ;-)
    Is there actually any working example code that manages to do 500 single inserts per second with one CU's 2000 RUs, or is this a purely theoretical value? Or is it just because it is a Preview and the stated values are planned to work later?
    Regarding your feedback:
    ...another thing to consider
    is if you can amortize the request rate over the average of 200 M requests/day = 2000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching "RequestRateTooLargeExceptions" and retrying
    after the server specified retry interval…
    Sadly this is not possible for me because i have to query the data in near real time for my use case…so queuing is not
    an option.
    We don't support a way to distribute throughput differently across hot and cold
    collections. We are evaluating a few solutions to enable this scenario, so please do propose as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize
    feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to it. 
    I guess i could circumvent this by not clustering in “hot" and “cold" collections but “hot" and “cold"
    databases with one or multiple collections (if 10GB will remain the limit per collection) each if there was a way to (automatically?) scale the CUs via an API. Otherwise i would have to manually scale down the DBs holding historic data. I
    also added a feature requests as proposed by you.
    Sorry for the long post but i am planning the future architecture for one of our core systems and want to be sure if i am on
    the right track. 
    So if you would be able to answer just one question this would be:
    How can I achieve the stated throughput of 500 single inserts per second with one CU's 2000 RUs in reality? ;-)
    Best regards and thanks again

  • Question regarding roaming and data usage

    I am currently out of my main country of service, and as such I have a question regarding roaming and data usage.
    I am told that airplane mode is sufficient to keep the phone from roaming, but does this apply to any background data usage for applications and such?
    If the phone is in airplane mode, is all use of the phone, including wifi and application use over wifi, free of any extra roaming charges?

    Ann154 wrote:
    If you are getting charged to use the wifi, then it is possible.  Otherwise no
    Just to elaborate here: Ann154 is referring to access charges for wifi, which have nothing to do with Verizon, for example if you are using wifi in a plane, hotel, internet cafe etc. that charges for it rather than offering it free. Verizon does not charge you for (or indeed know about!) wifi usage, or any other usage that is not on their cellular network (such as using a foreign SIM in a global phone), so these charges, if any, will not show up on the Verizon bill app. Having the phone in airplane mode prevents all cellular data traffic, so you should be fine.

  • Widget question regarding system usage

    I'm a recent convert from the PC world and am finding the Dashboard feature very useful. I did, however, have a question regarding the way widgets in the Dashboard access the iMac's system resources.
    Specifically, I was wondering: if a widget is installed and appears in the "Manage Widgets" list but is NOT active on the Dashboard, does that widget still utilize the system's resources? (i.e. is it still actively updating its information or performing its task) Or does it "sleep" until you actively enable it in the Dashboard?

    Welcome to Discussions!
    I believe widgets don't consume resources unless you have them open in Dashboard; just having them installed, but not open, won't consume resources.
    Since you're new to mac, you may want to check out Mac 101 and Switch 101.
    Message was edited by: joshz

  • Questions regarding customisation/configuration of PS CS4

    Hello
    I have accumulated a list of questions regarding customising certain things in Photoshop. I don't know if these things are doable and if so, how.
    Can I make it so that the list of blending options for a layer is by default collapsed when you first apply any options?
    Can I make it possible to move the canvas even though I'm not zoomed in enough to only have parts of it visible on my screen?
    Is it possible to enable a canvas rotate shortcut, similar to the way you can Alt+RightClick to quickly change brush size?
    Is it possible to lock button positions? Sometimes I accidentally drag them around when I meant to click.
    Is it possible to lock panel sizes? For example, if I have the Navigator and the Layers panels vertically in the same group, can I lock the height of the navigator so that I don't have to re-adjust it all the time? Many panels have a minimum height so I guess what I am asking for is if it's possible to set a maximum height as well.
    Is it possible to disable Photoshop from automatically appending "copy" at the end of layer/folder names when I duplicate them?
    These are things I'd really like to change to my liking as they are problems I run into on a daily basis.
    I hope someone can provide some nice solutions

    NyanPrime wrote:
    <answered above>
    2. Can I make it possible to move the canvas even though I'm not zoomed in enough to only have parts of it visible on my screen?
    3. Is it possible to enable a canvas rotate shortcut, similar to the way you can Alt+RightClick to quickly change brush size?
    4. Is it possible to lock button positions? Sometimes I accidentally drag them around when I meant to click.
    5. Is it possible to lock panel sizes? For example, if I have the Navigator and the Layers panels vertically in the same group, can I lock the height of the navigator so that I don't have to re-adjust it all the time? Many panels have a minimum height so I guess what I am asking for is if it's possible to set a maximum height as well.
    6. Is it possible to disable Photoshop from automatically appending "copy" at the end of layer/folder names when I duplicate them?
    These are things I'd really like to change to my liking as they are problems I run into on a daily basis.
    I hope someone can provide some nice solutions
    2.  No.  It's a sore spot that got some forum time when Photoshop CS4 was first released, then again with CS5.  It's said that the rules change slightly when using full-screen mode, though I personally haven't tried it.
    3.  Not sure, since I haven't tried it.  However, you may want to explore the Edit - Keyboard Shortcuts... menu, if you haven't already.
    4.  What buttons are you talking about?  Those you are creating in your document?  If so, choose the layer you want to lock in the LAYERS panel, then look at the little buttons just above the listing of the layers:
    5.  There are many, many options for positioning and sizing panels.  Most start with making a panel visible, then dragging it somewhere by its little tab.  One of the important features is that you can save your preferred layout as a named workspace.  Choose the Window - Workspace - New Workspace... to create a new named workspace (or to update one you've already created).  The name of that menu is a little confusing.  Once you have created your workspace, if something gets out of place, choose Window - Workspace - Reset YourNamedWorkspace to bring it back to what was saved.
    You'll find that panels like to "stick together", which helps with arranging them outside of the Photoshop main window.
    As an example, I use two monitors, and this is my preferred layout:
    6.  No, it's not possible to affect the layer names Photoshop generates, as far as I know.  I have gotten in the habit of immediately naming them per their usage, so that I don't confuse myself (something that's getting easier and easier to do...).
    Hope this helps!
    -Noel

  • Various questions on SAP XI usage

    Hi XI Experts,
    I have few questions on SAP XI usage :
    Q1 - Is SAP XI single-client like BW?
    If multiple clients are possible, are there any particular usage recommendations for design, configuration and production deployment on a PRD SAP XI system?
    Q2 - Is it technically possible to restart a successfully processed message (with the nice black & white flag in SXMB_MONI)?
    And I know it's not a good thing to try to do this...
    Q3 - Is it technically possible to modify the XML payload of a message?
    And again I know it's a very bad thing to try to do this...
    Q4 - We have to work with xCBL 4.0 definitions. I think I only have to import all the XSD definitions as External Definitions in the Interface Objects of my repository. Is that right?
    Many thanks for your responses.
    Best regards
    Etienne

    Hi,
    >>>>Q2 : few methods....hum hum...
    one of them I described in my weblog:
    /people/michal.krawczyk2/blog/2005/11/09/xi-restarting-successfully-processed-messages
    >>>>Q3 : i 'm in SP16. Do you think it'll be as simple as "Right click on message -> Restart message" ?.
    yes it will be that simple as WE19 for IDOCs in R3:)
    Regards,
    michal
    XI / PI FAQ - Frequently Asked Questions: /people/michal.krawczyk2/blog/2005/06/28/xipi-faq-frequently-asked-questions

  • Questions regarding how I pay my Line Rental Bill ...

    Hi BT, I'm currently a BE Broadband customer. I was a BT Broadband customer back in the days when you had the BT Home Hub 1.0 and 8mb packages, of which I was only getting about 5-6mb, 0.3mb upload, plus constant speed caps on my connection to 1mb. You guys didn't have true unlimited usage back then, so being a gamer and someone who frequently downloads I changed to BE Broadband. They gave me a steady 11mb connection with 1.3mb upload speed, which I've been satisfied with. But now BE Broadband has been sold off to Sky, which I don't think I want to be a part of as I've heard a lot of bad feedback, and I really do not want to be forced into paying their line rental if I were to switch to Sky Unlimited.
    So now I'm considering coming back to BT Broadband, knowing that you've probably improved over the years. I hope so. I know you have this whole BT Infinity going now, but my area doesn't support fibre optic broadband and probably never will; I've looked on the BT Openreach website and it says there are no plans for my area to be installed with fibre optics, which I would have wanted, but oh well. What are the chances of my internet speed being the same as it is now with BE Broadband? I don't see why it would go any slower. Are your upload speeds around the same?
    Also I have a question regarding the payment methods for your BT line rental. I'm already on BT line rental with the Anytime calling plan. I pay the bill quarterly every 3 months, which I'm comfortable with. What confuses me is that when I select the package to order BT Unlimited + Calls, I only have the option to select Monthly Line Rental or Line Rental Saver. Why am I able to pay my phone every 3 months if those are the only two options? I'd preferably want to stick to how I pay my phone bill. So furthermore, how should I go about ordering BT Unlimited broadband without it interfering with my current BT line rental? I just want BT Broadband Unlimited alongside my current BT line rental method of payment. Would I still be eligible for the 6 months free and the gift card? Is the BT Home Hub 4 that comes with it free also? Also one more question: I'd need my MAC code from BE Broadband, right?

    This is a community forum.  The people on here are other BT customers.
    The speeds have certainly improved but only if BT's equipment at your exchange supports ADSL2/2+.  If not, you'll get what you had before.  While the speeds have improved, the same can't be said for the customer service.  Upload on ADSL2+ can be up to about 1.2M.  On ADSLMax, it's still capped at 448K.
    BT like to combine broadband and phone on one bill.  So if you get line rental and broadband, they will send you one monthly bill for both.  You might be able to get them separately, but not through the web site.  Try sales on 0800 800 150.

  • Questions regarding Disk I/O

    Hey there, I have some questions regarding disk i/o and I'm fairly new to Java.
    I've got an organized 500MB file and a table like structure (represented by an array) that tells me sections (bytes) within the file. With this I'm currently retrieving blocks of data using the following approach:
    // Assume id is just some arbitary int that represents an identifier.
    String f = "/scratch/torum/collection.jdx";
    int startByte = bytemap[id-1];
    int endByte = bytemap[id];
    try {
        FileInputStream stream = new FileInputStream(f);
        DataInputStream in = new DataInputStream(stream);
        in.skipBytes(startByte);
        int position = collectionSize - in.available();
        // Keep looping until the end of the block.
        while (position <= endByte) {
            line = in.readLine();
            // some processing here
            String[] entry = line.split(" ");
            String docid = entry[1];
            int tf = Integer.parseInt(entry[2]);
            // update the current position within the file.
            position = collectionSize - in.available();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    This code does EXACTLY what I want it to do but with one complication. It isn't fast enough. I see that using BufferedReader is the choice after reading:
    http://java.sun.com/developer/technicalArticles/Programming/PerfTuning/
    I would love to use this class but BufferedReader doesn't have a skipBytes() method, which is vital to achieve what I'm trying to do. I'm also aware that I shouldn't really be using the readLine() method of the DataInputStream class.
    So could anyone suggest improvements to this code?
    Thanks
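    One hedged suggestion (not from this thread, and untested against the 500MB file): the cheapest win is usually to put a BufferedInputStream between the FileInputStream and the DataInputStream. skipBytes() is still available, and available() still reports the total remaining bytes, so the byte-position bookkeeping is unchanged while the reads become buffered. A sketch:
    // Sketch only: same logic as above, with a BufferedInputStream added to the chain.
    // Wrap in try/catch as in the original code.
    FileInputStream stream = new FileInputStream(f);
    DataInputStream in = new DataInputStream(new BufferedInputStream(stream, 64 * 1024));
    in.skipBytes(startByte);
    int position = collectionSize - in.available();
    while (position <= endByte) {
        String line = in.readLine();   // deprecated, kept only to preserve the byte-based position tracking
        // ... process the line ...
        position = collectionSize - in.available();
    }
    in.close();
    A BufferedReader over an InputStreamReader would read lines faster still, but then available() on the underlying stream no longer reflects what has actually been consumed, so the position arithmetic above would break; that is presumably why the original code avoids it.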

    Okay I've got some results and turns out DataInputStream is faster...
    EDIT: I was wrong. RandomAccessFile becomes a bit faster according to my test code when the block size to read is large.
    So I guess I could write two routines in my program, RAF for when the block size is larger than an arbitary value and FileInputStream for small blocks.
    Here is the code:
    public void useRandomAccess() {
        String line = "";
        long start = 1385592, end = 1489808;
        try {
            RandomAccessFile in = new RandomAccessFile(f, "r");
            in.seek(start);
            while (start <= end) {
                line = in.readLine();
                String[] entry = line.split(" ");
                String docid = entry[1];
                int tf = Integer.parseInt(entry[2]);
                start = in.getFilePointer();
            }
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
    }

    public void inputStream() {
        String line = "";
        int startByte = 1385592, endByte = 1489808;
        try {
            FileInputStream stream = new FileInputStream(f);
            DataInputStream in = new DataInputStream(stream);
            in.skipBytes(startByte);
            int position = collectionSize - in.available();
            while (position <= endByte) {
                line = in.readLine();
                String[] entry = line.split(" ");
                String docid = entry[1];
                int tf = Integer.parseInt(entry[2]);
                position = collectionSize - in.available();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    and the main looks like this:
       public static void main(String[]args) {
         DiskTest dt = new DiskTest();
         long start = 0;
         long end = 0;
         start = System.currentTimeMillis();
         dt.useRandomAccess();
         end = System.currentTimeMillis();
         System.out.println("Random: "+(end-start)+"ms");
         start = System.currentTimeMillis();
         dt.inputStream();
         end = System.currentTimeMillis();
         System.out.println("Stream: "+(end-start)+"ms");
    }
    The result:
    Random: 345ms
    Stream: 235ms
    Hmmm not the kind of result I was hoping for... or is it something I've done wrong?
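    One caveat on the benchmark (an observation, not from the thread): timing a single run of each method is heavily skewed by JIT warm-up and by the OS file cache, since whichever method runs second reads a block the first one has just pulled into the cache. A sketch of a fairer comparison, reusing the DiskTest class from above:
    // Run both methods several times in alternating order and look at the later iterations.
    DiskTest dt = new DiskTest();
    for (int i = 0; i < 5; i++) {
        long t0 = System.currentTimeMillis();
        dt.useRandomAccess();
        long t1 = System.currentTimeMillis();
        dt.inputStream();
        long t2 = System.currentTimeMillis();
        System.out.println("Run " + i + "  Random: " + (t1 - t0) + "ms  Stream: " + (t2 - t1) + "ms");
    }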

  • Question regarding XI/PI and Idoc processing.

    Hi,
    I'm learning XI/PI and I have a question regarding Idoc processing in PI.
    We need to configure communication between our BW system and our PI system using Idocs.
    The IDocs are sent from BW to our PI system and are then sent back to the BW system; there is no third system involved. The IDocs are only exchanged between PI and BW.
    Our BW system is already connected with many other R3 systems by using WE20 / WE21 and RFC's and everything works perfectly.
    While configuring this communication between BW and PI, it seems that PI passes the IDoc to the IDoc adapter, converts it to XML and tries to find a receiver for the particular IDoc. I see the error "NO_RECEIVER_CASE_ASYNC" in SXMB_MONI.
    Is this normal behaviour in PI? Why does PI think that the IDoc needs to be sent to another system when it is in fact intended for itself?
    Thanks and regards
    Remi

    Hi
    for error "NO_RECEIVER_CASE_ASYNC" in SXMB_MONI.
    This problem may occur due to one of following reasons, so check
    1 service is active in message? transaction SICF and activate service sap/xi/engine (right click, activate)
    2 Is the port 8001 defined in the services on the smicm under services?
    3 Check the roles assign PIDIRUSER
    http://help.sap.com/saphelp_nw04/helpdata/en/56/361041ebf0f06fe10000000a1550b0/content.htm
    role: SAP_XI_ID_SERV_USER attached to it
    Also Check Whether PIDIRUSER has following role
    SAP_SLD_CONFIGURATOR
    SAP_XI_RWB_SERV_USER
    SAP_XI_RWB_SERV_USER_MAIN
    Regards
    Abhishek

  • Bw Question :regarding the versioning

    Hii All,
    I posted a question regarding the versioning of the cube on Friday 9th May and still have not received any reply. Please let me know, or otherwise keep me posted that you are unable to answer my question.
    My Question was :
    In the versioning of the cube, we give that version a particular name and select its value type as 110, 130 or 140. What is this value type? What do 110, 130 or 140 really mean?
    Why do we need this value type, and can we get some documents to read and explore it? Please help.
    Thanks & regards ,
    Madhavi S Bichakal

    Hi Madhavi,
    Basically in BW you'll find two characteristics used for versioning:
    - Version: Used to create different versions of the information
    - Value type: used to indicate what the information means.
    Examples:
    Version 000 is usually Plan/Actual data (the final version). Then, for version 000 you will have different value types, like 010 = Actual, 020 = Plan, 030 = Target, etc..
    Then you can have different versions (001, 002, 003) that are used in the planning process. You start with version 001, then you can move to 002, 003,... and when you have the final Plan, you move to 000.
    That's the usual usage of version / value type.
    But you can use it as you want. The only problem you can have is that if you rename the description of a value type and then activate a BCT that generates data for that value type, the description will be incorrect.
    From what you said, you are using values from 100 and above, SAP uses up to 90 from what i've seen, so you won't have any problems.
    Hope this clarifies.
    Regards,
    Diego

  • Need help regarding iphoto library

    Hi, I am new to Mac and I have a question regarding iPhoto. I have photos organised in folders on a hard drive. I've realised that when I import them into iPhoto with the preference of copying photos to the iPhoto library, it is a waste of space: the photos take up space in the iPhoto library as well as on the hard drive. What is the better way to organise? Is it advisable to import them but not copy them into the iPhoto library? Or if I import them to the library and delete them from the hard drive, can I still get access to the files if I need to? Where do I keep them?
    pls help

    vashi
    Bluntly: copy the files to the iPhoto Library Folder and trash your own file system.
    Why?
    1. Importing and deleting pics are more complex procedures
    2. You cannot move or rename the files on your system or iPhoto will lose track of them
    3. Most importantly, migrating to a new disk or computer can be much more complex.
    Always allowing for personal preference, I've yet to see a good reason to run iPhoto in referenced mode unless you're using two photo organisers.
    Accessing your files from other apps is easy: pick one, more or all of these:
    There are three ways (at least) to get files from the iPhoto Window.
    1. *Drag and Drop*: Drag a photo from the iPhoto Window to the desktop, there iPhoto will make a full-sized copy of the pic.
    2. *File -> Export*: Select the files in the iPhoto Window and go File -> Export. The dialogue will give you various options, including altering the format, naming the files and changing the size. Again, producing a copy.
    3. *Show File*: Right- (or Control-) Click on a pic and in the resulting dialogue choose 'Show File'. A Finder window will pop open with the file already selected.
    To upload to MySpace or any site that does not have an iPhoto Export Plug-in the recommended way is to Select the Pic in the iPhoto Window and go File -> Export and export the pic to the desktop, then upload from there. After the upload you can trash the pic on the desktop. It's only a copy and your original is safe in iPhoto.
    This is also true for emailing with Web-based services. If you're using Gmail you can use THIS
    If you use Apple's Mail, Entourage, AOL or Eudora you can email from within iPhoto.
    If you use a Cocoa-based Browser such as Safari, you can drag the pics from the iPhoto Window to the Attach window in the browser. Or, if you want to access the files with iPhoto not running, then create a Media Browser using Automator (takes about 10 seconds) or use THIS
    Also, for 10.5 users: If you use the extended Open or Attach dialogue (with Column View) you can scroll to the bottom of the Shortcuts and find the Media browser there. Select any pic you want from there.
    Regards
    TD

  • Questions regarding DTP

    Hello Experts
    I am trying to load data from ODS to a Cube and  have the following questions regarding DTP behaviour.
    1) I have set the extraction mode of the DTP to delta, as I understand that it behaves like init with data transfer the first time. However, it fetches records only from the change log of the ODS. Then what about all the records that are in the active table? If it cannot fetch the records from the active table, how can we call it init with data transfer?
    2) Do I need two separate DTPs - one for a full load to fetch all the data from the active table and another to fetch deltas from the change log?
    Thanks,
    Rishi

    1. When you choose Delta as the extraction mode you get the data only from the change log table.
    The change log table will contain all the records.
    Suppose you run a load to the DSO which contains 10 records and activate it. Those 10 records are now available in the active table as well as the change log.
    In the second load you have 1 new record and 1 changed record. When you activate, your active table will have 11 records. The change log will have before- and after-image records for the changed record, along with the new record.
    The cube needs those images so that the data cannot get mismatched with the old data.
    2. If you run a full load to the cube from the DSO you need to delete the old request after the load, which is not necessary in the previous case.
    In BI 7.0, when you choose full load extraction mode you have the flexibility to load the data from either the active table or the change log table.
    Thanks
    Sreekanth S
