Multiple reads of the same data from MultiProvider by Query (BEx)

Hello, guys!
We're having an issue with the performance of a query built on a MultiProvider. During our investigation, we found that within one run of the query, it reads the same data from the InfoProvider several times (see the image attached).
Do you have any ideas what could cause these multiple reads of the same data from the MultiProvider?

Hello Nikita,
By "copy of a query" I meant something like this, as shown below:
*Kindly click on the screenshot for a better view.
1) See the highlighted portions in the screenshot below: Query 2 is highlighted, along with the name of the BEx query.
2) See the highlighted portions: Query 3 is highlighted, along with the name of the BEx query.
As you can see from the screenshots above, I have used the same BEx query twice, under the names Query 2 and Query 3. In fact, I have not attached the complete screenshot; in the full report I have used it 6 times.
I still have to analyze this in detail, but my guess is that when this WebI report is called, the single BEx query is also called multiple times. Hence it hits the InfoProvider multiple times, resulting in decreased performance.
But this does not mean that this is the wrong approach. There are various areas where you can improve, for example:
1) Improve your BEx query if possible, or use aggregates or something similar.
2) Use the query stripping setting in WebI so that unused dimensions and measures are not pulled, resulting in improved performance. It's switched on by default.
Thanks!!
Regards,
Ashutosh Singh

Similar Messages

  • Events cannot have multiple exceptions for the same date

    I just started getting this message and could not sync to one of my Google calendars. I'm posting this for others who might hit the same problem.
    I didn't find the answer on these forums but did find it on this thread on Google:
    http://www.google.com/support/forum/p/Calendar/thread?tid=241155f758d9e2a4&hl=en
    Here's the important excerpt:
    "I had a client, who just had this same issue, nothing to do with Google cals.
    It was apparently, in my best guess, a corruption of the subscribed cal.
    *I did a get info on the cal, copied the URL, deleted the cal, then re-subscribed to it by pasting in the URL, and now it's working fine*."

    I've been having the same problem with my iCal calendars and the "Events cannot have multiple exceptions for the same date" error. Once it gets going, it uses up a lot of the CPU and resources. After reinstalling iCal, all my calendars were missing and I could not even resubscribe to them.
    I took my MacBook Pro to the Apple Store, and they were able to solve the problem by moving some of the iCal files from their existing folders out to the desktop, and reopening the program. That got it working, however, now I'm having the same problem again. So back to square one. Anyone else having this issue and know the cause?
    My setup: my MacBook Pro uses Entourage, and I use that calendar in my iCal. I also subscribe to two calendars my wife publishes from her MacBook. We're both using Snow Leopard.

  • Is there a way to delete multiple pictures at the same time from the iphone4s?

    Is there a way to delete multiple pictures at the same time from my iPhone 4S? I know how to delete one at a time. Thanks

    Open your Photos App > Camera Roll > At the top right corner you will see a rectangle with a right arrow, select that. Now you can select as many photos as you want and you can hit the red Delete button on the bottom right.

  • How can I remove multiple copies of the same song from the iTunes listing?

    How can I remove multiple copies of the same song from the iTunes listing? The program seems to be picking up the same songs from, for example, my user area and my public area on the C: drive.

    As above, Apple's official advice is here... HT2905 - How to find and remove duplicate items in your iTunes library, however it is a manual process and the article fails to explain some of the potential pitfalls.
    Use Shift > View > Show Exact Duplicate Items to display duplicates as this is normally a more useful selection. You need to manually select all but one of each group to remove. Sorting the list by Date Added may make it easier to select the appropriate tracks, however this works best when performed immediately after the dupes have been created.  If you have multiple entries in iTunes connected to the same file on the hard drive then don't send to the recycle bin.
    Use my DeDuper script if you're not sure, don't want to do it by hand, or want to preserve/merge ratings, play counts and playlist membership. See this thread for background and please take note of the warning to backup your library before deduping.
    (If you don't see the menu bar press ALT to show it temporarily or CTRL+B to keep it displayed)
    tt2
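    The manual rule above (show exact duplicates, sort by Date Added, keep one of each group) can be sketched in Python; the track fields and paths here are made up for illustration, not iTunes' actual data model.

```python
# Group tracks that look like exact duplicates and keep the earliest-added
# copy of each group (fields and paths are illustrative, not iTunes' own).
tracks = [
    {"name": "Song A", "artist": "X", "added": 1, "path": "C:/Users/me/Song A.mp3"},
    {"name": "Song A", "artist": "X", "added": 2, "path": "C:/Users/Public/Song A.mp3"},
    {"name": "Song B", "artist": "X", "added": 1, "path": "C:/Users/me/Song B.mp3"},
]

groups = {}
for t in tracks:
    groups.setdefault((t["name"], t["artist"]), []).append(t)

keep, remove = [], []
for dupes in groups.values():
    dupes.sort(key=lambda t: t["added"])   # sort by Date Added, as suggested
    keep.append(dupes[0])                  # keep one of each group
    remove.extend(dupes[1:])               # candidates for removal

print(len(keep), len(remove))
```

    The same "all but one per group" selection is what the HT2905 article has you do by hand.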

  • Synchronized my iPhone 3G 8G to copy the contacts and calendars from Outlook, I wonder if I can do the same synchronization to copy the same data from Gmail

    I synchronized my iPhone 3G 8GB to copy the contacts and calendars from Outlook. I wonder if I can do the same synchronization to copy the same data from Gmail?

    Right, I finally managed to get it sorted out.
    iCloud only accepts version 3.0 vCards, and the ones I was using were version 2.1, so that's why it wasn't picking them up. The easy way to get that sorted is to use a Gmail account.
    I know you don't want to do it because you think it's too much hassle, but trust me, it only takes 5 minutes:
    1. Create a Gmail account.
    2. Export your old vCard files to the Gmail account.
    3. Now import them from Gmail back to your PC.
    And that's it: the newly imported version is one file containing all your contacts in version 3.0. Now you can just upload that to iCloud and then sync it with your iPhone.
    That's what I did and it worked. I'm sure that if you replace this file in the Contacts folder under your user account in Windows and then try to sync contacts in iTunes, it should work too, but as I said, I did it with iCloud and it worked for me.
    I'd been searching for this the whole day and it took 5 minutes in the end.
    Anyway, don't lose hope and always Google for everything!
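    For what it's worth, the difference iCloud cares about is the VERSION line inside the file. Here is a minimal sketch of what a 3.0 card looks like as text (the name and number below are hypothetical):

```python
# A minimal version 3.0 vCard as text; iCloud rejects VERSION:2.1 files.
# The contact details below are hypothetical.
vcard_30 = "\r\n".join([
    "BEGIN:VCARD",
    "VERSION:3.0",
    "N:Doe;John;;;",      # structured name (required in 3.0)
    "FN:John Doe",        # formatted name (required in 3.0)
    "TEL;TYPE=CELL:+1-555-0100",
    "END:VCARD",
])
print(vcard_30.splitlines()[1])
```

    Exporting through Gmail, as described above, rewrites 2.1 cards into this 3.0 shape.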

  • Load multiple files using the same data load location

    Has anybody tried loading multiple files using the same load location? I need to do this as the data in these multiple files will need to be exported from FDM as a single export file. The problem I am facing is more user-related: since these files will be received at different points in time, users will need a way to tell what has been loaded and what is yet to be loaded.
    Is it possible to show a window in the web browser with OK and Cancel buttons from an event script?
    Any pointers to possible solutions will be helpful.

    I was able to resolve this. The implementation method is as follows:
    Take a backup of previously imported data in the BefClearData event script. Then, in the BefFileImport event, append that data to the import file. There are many other intricacies, but this is the broad implementation logic. It allowed my users to load multiple files without worrying about append-or-replace import type choices.

  • Why does PSE 10 Organizer jumble up photos on the same date from different locations ?

    I have PSE 10 installed on a PC with Windows 7. My camera is a Nikon D90 using a SanDisk 8 GB SD card. When I take photos at different locations on the same date and download them into the Organizer, instead of keeping the photos from the different locations together, it jumbles them all up. It does not keep them in order by time taken from first to last for that day; it just mixes them all up in random order. Why?

    Hi Lyndy,
    When you use Albums and Keyword Tags, you aren't moving the images around (they stay in their folders) - you just look at them differently.
    What you can try is this:-
    1) select one of your folders in folder view so that it displays all of those images in filename order
    2) click on the instant album button (to the top right of the thumbnails)
    This will generate an album with the same name as the folder
    3) Now switch to Thumbnail view
    4) click on the new album name on the right side
    Now all the images should be in date/time order - you may have to adjust the options
    The real power of the Keyword Tags is the many different ways you can look at the images.
    If you have a Keyword Tag structure like this:-
    Places
         Scotland
                Holyrood
                Britannia
    Then if you assign the Holyrood and Britannia tags to the appropriate photos, there are various ways of viewing the photos.
    Selecting just Holyrood would show only the Holyrood ones.
    Selecting Scotland would show both the Holyrood and Britannia ones.
    The only limit seems to be your own imagination
    I hope that gives you ideas rather than adding confusion
    Brian
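    Brian's tag hierarchy amounts to "a parent tag implies all its children". A small sketch of that selection rule (the file names and structure are assumed for illustration):

```python
# Hierarchical keyword tags: selecting a parent tag (Scotland) also selects
# photos carrying any of its child tags. Names here are illustrative only.
children = {"Places": {"Scotland"}, "Scotland": {"Holyrood", "Britannia"}}

photos = [
    {"file": "p1.jpg", "tags": {"Holyrood"}},
    {"file": "p2.jpg", "tags": {"Britannia"}},
    {"file": "p3.jpg", "tags": set()},
]

def expand(tag):
    # A tag implies itself plus everything below it in the hierarchy.
    wanted = {tag}
    for child in children.get(tag, set()):
        wanted |= expand(child)
    return wanted

def select(tag):
    wanted = expand(tag)
    return [p["file"] for p in photos if p["tags"] & wanted]

print(select("Scotland"))
```

    Selecting "Holyrood" returns only the Holyrood photo, while "Scotland" returns both, mirroring the behaviour described above.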

  • Importing multiple rows from the same date from one table to another.

    I need to pull information from one sheet (Sheet 1) to another (Sheet 3). I am able to pull the first line of info with VLOOKUP, but I need all rows for a specific date, which could range from zero to 10 rows depending on the day, according to the date in cell G1 on Sheet 3. I am importing the needed information from Sheet 2 with VLOOKUP, but since that is information from one cell to another it's not an issue. Is there a way to transfer the needed data?

    Hello
    Here's another method to build a summary table: calculate the index of every row in the source data matching a given key, and use the indices to retrieve the rows of data.
    E.g.,
    Data (excerpt)
    A1  date
    A2  2015-03-12
    A3  2015-03-12
    A4  2015-03-12
    A5  2015-03-12
    B1  a
    B2  A
    B3  B
    B4  C
    B5  D
    C1  b
    C2  1
    C3  2
    C4  3
    C5  4
    Summary (excerpt)
    A1  a
    A2  =IF($D2<>"",INDEX(Data::B,$D2,1),"")
    A3  =IF($D3<>"",INDEX(Data::B,$D3,1),"")
    A4  =IF($D4<>"",INDEX(Data::B,$D4,1),"")
    A5  =IF($D5<>"",INDEX(Data::B,$D5,1),"")
    B1  b
    B2  =IF($D2<>"",INDEX(Data::C,$D2,1),"")
    B3  =IF($D3<>"",INDEX(Data::C,$D3,1),"")
    B4  =IF($D4<>"",INDEX(Data::C,$D4,1),"")
    B5  =IF($D5<>"",INDEX(Data::C,$D5,1),"")
    C1  2015-03-11
    C2 
    C3 
    C4 
    C5 
    D1  index
    D2  =IFERROR(MATCH(C$1,Data::A,0),"")
    D3  =IFERROR(MATCH(C$1,OFFSET(Data::A,D2,0,ROWS(Data::A)-D2,1),0)+D2,"")
    D4  =IFERROR(MATCH(C$1,OFFSET(Data::A,D3,0,ROWS(Data::A)-D3,1),0)+D3,"")
    D5  =IFERROR(MATCH(C$1,OFFSET(Data::A,D4,0,ROWS(Data::A)-D4,1),0)+D4,"")
    Notes.
    The formulas in A2 and B2 can be filled down.
    The formula in D3 can be filled down. Note that D2 has a different formula from D3.
    Tables are built in Numbers v2.
    Hope this may help you to get the basic idea.
    H
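    Outside Numbers, the chained-MATCH idea in the D column reduces to "collect every row index whose date matches the key, then pull those rows". A Python sketch using the excerpt's data:

```python
# The D-column formulas find successive positions of the key date in
# Data::A; collecting all matching row indices is the same idea, and the
# A/B-column INDEX formulas then pull those rows.
dates = ["2015-03-12", "2015-03-12", "2015-03-12", "2015-03-12"]
col_b = ["A", "B", "C", "D"]
col_c = [1, 2, 3, 4]

key = "2015-03-12"
indices = [i for i, d in enumerate(dates) if d == key]
rows = [(col_b[i], col_c[i]) for i in indices]   # rows retrieved by INDEX
print(rows)
```

    A key with no matches (like 2015-03-11 in the excerpt) yields an empty list, which corresponds to the blank cells produced by the IFERROR/"" guards.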

  • Multiple users accessing the same data in a global temp table

    I have a global temp table (GTT) defined with 'on commit preserve rows'. This table is accessed via a web page using ASP.NET. The application was designed so that everyone who accessed the web page could only see their own data in the GTT.
    We have just realized that the GTT doesn't appear to be empty as new web users use the application. I believe it has something to do with how ASP is connecting to the database. I only see one entry in the V$SESSION view even when multiple users are using the web page. I believe this single V$SESSION entry is causing only one GTT to be available at a time. Each user is inserting into / selecting out of the same GTT and their results are wrong.
    I'm the back end Oracle developer at this place and I'm having difficulty translating this issue to the front end ASP team. When this web page is accessed, I need it to start a new session, not reuse an existing session. I want to keep the same connection, but just start a new session... Now I'm losing it.. Like I said, I'm the back end guy and all this web/connection/pooling front end stuff is magic to me.
    The GTT isn't going to work unless we get new sessions. How do we do this?
    Thanks!

    DGS wrote:
    > The GTT isn't going to work unless we get new sessions. How do we do this?
    You may want to try changing your GTT to 'ON COMMIT DELETE ROWS' and have the .NET app use a transaction object.
    We had a similar problem and I found help in the following thread:
    Re: Global temp table problem w/ODP?
    All the best.
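    The session-privacy of temporary tables is easy to demonstrate outside Oracle. SQLite TEMP tables are likewise scoped to a connection, which illustrates why one shared pooled session mixes everyone's rows while separate sessions keep them apart:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path)   # one "session"
b = sqlite3.connect(path)   # another "session" against the same database

for conn in (a, b):
    conn.execute("CREATE TEMP TABLE scratch (v INTEGER)")

a.execute("INSERT INTO scratch VALUES (1)")   # visible only to session a
count_in_b = b.execute("SELECT COUNT(*) FROM scratch").fetchone()[0]
print(count_in_b)
```

    If all web users funnel through the single connection `a`, they all see the same scratch rows, which is exactly the symptom described above.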

  • AE not reading all the XMP data from DNG files

    I'm working on a very large time-lapse for a construction company. I have a sunset that is broken between two folders, and rendering the folders separately makes QuickTime unhappy during playback: I get an ugly jump in sunset exposure.
    So I combined the two folders in Lightroom (LR) from my RAW files (NEF) and exported the entire sequence as DNG files with simpler numbering. The sequence is about 1560 files and constitutes about 22 GB of data. I created a new LR library from the DNG sequence to make sure it all looked good, and it does.
    I imported the same DNG sequence into AE, checked a few frames in the preview and sent it out to render. Two thirds of the way through the QT movie, exported in H.264 at 30 fps, I start seeing problems. Huge sections look like the XMP data has not been read, then there's a good section, and then bad again. I matched the movie time to the preview in AE and indeed saw the same problems in the AE preview. So the problem began in AE's reading of the files from the folder, not in rendering.
    I went back to LR for file comparison. There was no problem in LR. All the DNGs look great.
    I'm on a MacBookPro Retina with 2.8 GHz Intel processor, 16GB 1600 MHz DDR3 Ram and NVIDIA GeForce GT 650M 1024 MB graphics
    I have the standard 69GB cache on an external drive and have 11GB RAM available for AE.
    Installed CPUs =8
    CPUs reserved for other apps = 2
    RAM allocation per background CPU = 1GB
    Actual CPUs that will be used = 6
    I have the latest AE from the Adobe Cloud, although I haven't done the most recent update yet.
    My thought was to break the folder up into smaller segments for rendering, but that's not the problem. I can see the XMP hasn't been consistently read in the preview. To me it doesn't make sense that the folder is too large for preview, because it's not even rendering there, just reading the files...?
    Any insight, solutions?
    Thanks,
    Dennis

    Moominman wrote:
    I am basically trying to export xmp files from a set of low resolution dng files so that I can access my Lightroom edits in the RAW files. I have separated the RAW and dng files in different folders
    Hi Andy,
    I don't know how best to get extracted XMP files into the raw folders, but if you are comfortable with exiftool, you can use it to extract XMP sidecars from DNG files.
    If you want a turn-key solution which does not require you to futz with exiftool, then consider a free plugin I wrote:
    robcole.com - xEmP
    It will allow you to create xmp sidecars with all your DNG adjustments and metadata (which can then be applied to the non-dng raw files).
    However, if you won't need the DNGs in your catalog afterward, then the easiest way is to convert them back to proprietary raw format using this plugin (also free, and I wrote it):
    robcole.com - UnDNG
    Conceptually, you can think of it as converting the DNGs to proprietary raw format, but note: it doesn't convert anything, it just allows existing raw files that are NOT in the catalog, to replace the DNGs that are in the catalog. All adjustments and metadata and everything else will be preserved (just like when you convert a proprietary raw to DNG format).
    Rob

  • Please help! Multiple users accessing the same data sets

    Hi all,
    Can anyone provide a bit of insight in to a question I have?
    We have two users who need to see the same set of data in the BPC Excel interface at the same time. The information is employees and dates.
    User 1 would like to see all Employee SignedData for 1 month, and User 2 would like to see just a slice of the Employees for 1 month.
    If both users are logged in at the same time, what will happen in terms of SAP 'locking' the data set? I am aware of Data Access Profiles to restrict their access to particular master data, but there will be a requirement for users to see (maybe just read-only) data that is shared between both users.
    Will it throw up an error, or can I make it so that users have read-only access?
    Any advice would be very much appreciated!
    Nick

    Hi Nick,
    No issue with that at all.
    They can even both have write access. If they try to update the exact same record at the same time BPC will just keep writing Delta records.
    User A enters 10
    User B enters 20
    User A refreshes and will get 20
    User B refreshes and also gets 20
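    A toy model of the delta-write behaviour described above: each save stores the difference between the entered value and the current total, so concurrent writers never block each other and the latest refresh wins. (This is only a sketch of the idea, not BPC's actual storage.)

```python
# Each save records the delta between the entered value and the current
# total; reading a value means summing the deltas.
deltas = []

def current_total():
    return sum(deltas)

def save(entered_value):
    deltas.append(entered_value - current_total())

save(10)   # User A enters 10 -> stored delta +10
save(20)   # User B enters 20 -> stored delta +10
print(current_total())
```

    After both saves, any user who refreshes reads 20, matching the User A / User B sequence above.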

  • Concurrent multiple requests to the same servlet from same client

    We are using WebLogic as our web and app server, with the WebLogic Oracle pool for database connections. We have JSPs and servlets in the second tier, with EJBs on the third tier.
    Our problem is this: we have set the Oracle pool max size to 40, and some of our database searches take about 30 seconds. If a user submits a request for such a search, it takes about 30 seconds to respond to the client.
    In the meantime, if the user submits the same request again and again (by clicking the URL link in the HTML page continuously, say 50 times), the servlet takes each request, creates a new thread for each request, and each thread uses a connection. So our pool runs out of connections, and on top of that we get 'resource unavailable' or 'pool connection failed' exceptions.
    All the users hang. Sometimes it recovers, but most of the time the server crashes.
    We have not set any timeout for pool-connection waiting time. By default, WebLogic keeps the threads waiting for a connection indefinitely.
    So if somebody wants to crash our site, they can simply hit a database search link (which takes about 30 seconds) 50 to 100 times, and our site will go down.
    What is a good solution for this? I think this is a common problem that many people must have solved. One way is to find and block the user who is hitting the link many times.
    Any better solutions, please?
    regards
    sathish
              

              "Cameron Purdy" <[email protected]> wrote in message
    news:[email protected]...
    > There are other ways to do the processing besides JMS, but the main idea is
    > this: DO NOT DO IT ON A WL EXECUTE THREAD IN YOUR WEB SERVER -- those are
    > for your HTTP requests and you don't want to use them up. You can use RMI
    > and have your RMI object spin off a thread.
    Now we're going in circles. I've heard it repeatedly argued here that you don't ever want to do anything in the server that is not within a server execute thread.
    My "big process" needs to run on the server, since it manipulates EJBs etc., in the same sense that JavaBeans launched from a JSP page run in the server.
    So I just don't understand why it's now not OK to use execute threads, when I'm going to be initiating a thread of control in the server anyway.
    > Here's the second issue: idempotency. Make sure you pre-assign an ID to
    > the task, and have that ID accompany the task as it runs. Use that ID to
    > update a shared resource (perhaps a db) to show the progress of the task,
    > and keep that ID in the shared resource afterwards so that the task is not
    > repeated unnecessarily (refresh) and so the user's request now shows that
    > the task is complete.
    My solution associates an AsynchTask object with a session-scope JavaBean, so for a given session there can be only one task object, etc. Will this work?
    Thanks,
    Jason
    > "Jason Rosenberg" <[email protected]> wrote in message
    > news:[email protected]...
    > > Cameron,
    > > A few questions...
    > > Is JMS the only way to "kick off the big process"? Is there a way to launch another servlet, or god forbid, another thread, etc.?
    > > I'd rather not have to use JMS right now, due to time constraints (it's another thing to have to figure out...).
    > > Is it necessary to use JavaScript to redirect? Can't we just use a simple meta refresh tag, which causes the same JSP to be hit repeatedly, and which will keep resending the HTML with the meta refresh until the "big process" has completed?
    > > Also, if we have a JSP which uses a bean with session scope, don't we then get built-in "uid" tracking? The bean instantiated will necessarily be of the current session, it seems, as long as the user keeps the same browser window open (or does resending cause a new session to be started -- I didn't think so...).
    > > Can you elaborate on how the completed process information can be shared back to the session, and then returned to the browser, etc.?
    > > Jason
    > > "Cameron Purdy" <[email protected]> wrote in message
    > > news:[email protected]...
    > > > This cut & paste feature is getting handy...
    > > > 1) The work to be done is assigned a uid or something similar to prevent it from being done twice
    > > > 2) The user presses the button which passes the uid in a hidden field (for example)
    > > > 3) The servlet responds by kicking off the big process with JMS and sends back a page that displays a "processing..." message and uses JavaScript to redirect (with a place to click just in case JavaScript is turned off)
    > > > 4) The URL redirected to includes the uid to identify the process for which the result is desired
    > > > 5) When the process is completed, the information is placed in some known location (e.g. HttpSession or database) and the pending request to find the result can return the result
    > > > Cameron Purdy
    > > > [email protected]
    > > > http://www.tangosol.com
    > > > WebLogic Consulting Available
              
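    Cameron's numbered steps can be sketched end to end. Below is a minimal Python model of the uid-based, idempotent kick-off-and-poll pattern; the names (start_task, poll) are assumptions, and a dict stands in for the HttpSession or database he mentions.

```python
import threading
import time
import uuid

results = {}             # shared store (HttpSession or a database in the thread)
started = set()          # uids already kicked off
lock = threading.Lock()

def big_process(uid):
    time.sleep(0.05)     # stand-in for the 30-second search
    results[uid] = "done"

def start_task(uid):
    # Idempotent kick-off: a refresh or double-click with the same uid
    # does not start the work a second time.
    with lock:
        if uid in started:
            return
        started.add(uid)
    threading.Thread(target=big_process, args=(uid,)).start()

def poll(uid):
    # What the redirected-to page does until the result appears.
    return results.get(uid, "processing...")

uid = str(uuid.uuid4())
start_task(uid)
start_task(uid)          # duplicate request: ignored
time.sleep(0.3)
print(poll(uid))
```

    The key point, matching the pool-exhaustion complaint above, is that repeated requests with the same uid consume no extra worker threads or connections.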

  • How can I stop multiple downloads of the same email from sympatico.ca?

    Sympatico email has been set up with POP enabled and Do What the App says when deleting downloaded messages.
    TBird Account has been set up with server settings Leave Messages On Server unchecked.
    Starting a month or so ago, a second copy of an email already received in my Inbox would arrive about 5 or 10 minutes after the first. Starting this AM, I received a third copy of the same email.
    Thanks for any help - this is starting to get annoying!

    http://kb.mozillazine.org/Duplicate_messages_received

  • Multiple users updates the same data - RowInconsistentException

    Hi,
    I'm using JDeveloper 11.1.2.1
    Locking mode: optimistic
    Scenario:
    - I have 2 users (user 1 & user 2) running application X
    - Both users update the same record
    - User 1 hits save first (and hence gets no error)
    - User 2 hits save after user 1, and gets RowInconsistentException
    I have managed to trap the exception in the EntityImpl class:
    public void lock() {
        try {
            super.lock();
        } catch (RowInconsistentException ex) {
            this.refresh(REFRESH_UNDO_CHANGES);
            super.lock();
        }
    }
    But what this does is just refresh the entities and remove user 2's work without notification, which isn't acceptable.
    Instead of this, is it possible to display an error message in user 2's UI (instead of the stack error), refresh the entities, but keep user 2's work, and possibly recommit?
    Thank You
    Regards,
    Andi

    Andi,
    > is it possible to display an error message in user 2's UI (instead of the stack error)
    You can customise the error handling, yes, to display a different message if you like (check out the Fusion Developer's Guide to find out how).
    > refresh the entities, but keep user 2's work
    Not sure what you mean there.
    By default (at least it used to be this way; I haven't checked recently), if you commit again after receiving the "row inconsistent" error, it will save user 2's changes (potentially overwriting user 1's changes).
    John
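    John's "commit again overwrites" behaviour maps onto plain optimistic locking. A minimal sketch with an assumed version column (this is the general technique, not ADF's actual implementation):

```python
# Optimistic locking sketch: a row carries a version, and a save fails
# when the version changed since the snapshot was read.
class StaleRowError(Exception):
    pass

row = {"value": "original", "version": 1}

def read():
    return dict(row)

def save(snapshot, new_value):
    if snapshot["version"] != row["version"]:
        raise StaleRowError("row changed since it was read")
    row["value"] = new_value
    row["version"] += 1

u1 = read()                        # user 1 reads
u2 = read()                        # user 2 reads the same version
save(u1, "user 1's change")        # user 1 saves first: ok
try:
    save(u2, "user 2's change")    # stale snapshot: raises
except StaleRowError:
    u2 = read()                    # refresh, show the user a message...
    save(u2, "user 2's change")    # ...then recommitting overwrites user 1
print(row["value"])
```

    The catch-refresh-retry block is the moment to surface a friendly message and let the user decide whether to recommit, which is what Andi was asking for.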

  • Accessing the same data from multiple threads

    Hi
    In the following program, the Task5 routine takes ~3 s to complete; when I uncomment the t2 lines it takes 11 s (this is on a quad-core x86/64 machine). Since there is no explicit synchronization, I was expecting 3 s in both cases.
    public static int sdata;

    public static void Task5()
    {
        int acc = 0;
        for (int i = 0; i < 1000000000; ++i)
        {
            sdata = i;
            acc += sdata;
        }
    }

    [STAThread]
    static void Main()
    {
        Stopwatch sw = new Stopwatch();
        sw.Start();
        Thread t1 = new Thread(new ThreadStart(Task5));
        // Thread t2 = new Thread(new ThreadStart(Task5));
        t1.Start();
        // t2.Start();
        t1.Join();
        // t2.Join();
        sw.Stop();
        System.Diagnostics.Debug.WriteLine(sw.ElapsedMilliseconds.ToString());
    }
    Why are these threads blocking each other?

    This is loosely a duplicate of https://social.msdn.microsoft.com/Forums/en-US/cd00284d-3da3-457e-8926-c490e7ca6d92/atomic-loadstore?forum=vclanguage
    I answered you in detail over at the other thread.
    But the short version is that the threads are competing for access to system memory, specifically at the memory location of sdata. This demonstrates how to spoil the benefit of not having to write through from your CPU cache to system memory. CPU caches are wonderful things; CPU cache memory is WAY faster than system memory.
