Analysing Large ST01 Files

Morning All,
Have been performing blocks of traces for some new process activities for Portal in our R/3 Dev system using the trace transaction code ST01.
Some of these trace files are very large, over 10,000 lines when downloaded into Excel. Please, SAP Security gurus, can anyone tell me if there is an easy way, or a tool, to analyse large ST01 files so that I don't have to check hundreds of duplicate authorisation lines?
Thanks
Steve

You may want to reduce the amount of time you activate the trace for. Also, once you get the file into Excel: since the rows are very often the same auth object and values, you can use a filter with "unique records only" to get just one row for each auth object and value combination. In Excel, select the columns with the auth object and value and then go to Data \ Advanced Filter. You will see the option for "unique records only".
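
If the trace ends up in a CSV or Excel export anyway, the same de-duplication can be scripted instead of done by hand. Below is a minimal Python sketch using pandas; the file name and the column names ("Object", "Values") are assumptions about how the ST01 export was saved and would need to be adjusted to match the real download.

        # Hypothetical de-duplication of an exported ST01 trace.
        # "st01_trace.csv", "Object" and "Values" are placeholder names --
        # adjust them to match the columns of the actual export.
        import pandas as pd

        trace = pd.read_csv("st01_trace.csv")

        # Keep one row per unique auth object / value combination,
        # mirroring Excel's "Unique records only" advanced filter.
        unique_checks = trace.drop_duplicates(subset=["Object", "Values"])

        unique_checks.to_csv("st01_trace_unique.csv", index=False)
        print(f"{len(trace)} rows reduced to {len(unique_checks)} unique checks")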

Similar Messages

  • Analyse large heap dump file

    Hi,
    I have to analyse a large heap dump file (3.6 GB) from a production environment. However, when I open it in Eclipse MAT, it gives an OutOfMemoryError. I tried increasing the Eclipse workbench Java heap size as well, but it doesn't help. I also tried VisualVM. Can we split the heap dump file into smaller pieces? Or is there any way to set a maximum heap dump file size in the JVM options so that we collect heap dumps of a reasonable size?
    Thanks,
    Prasad

    Hi, Prasad
    Have you tried opening it in 64-bit MAT on a 64-bit platform, with a large heap size and the CMS GC policy set in the MemoryAnalyzer.ini file? MAT is a good toolkit for analysing Java heap dump files. If it doesn't work, you can try Memory Dump Diagnostic for Java (MDD4J) in the 64-bit IBM Support Assistant, again with a large heap size.
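
    For reference, the heap increase goes in the -vmargs section of MemoryAnalyzer.ini (keep the existing launcher lines above -vmargs unchanged). A sketch of the relevant tail of the file is below; the -Xmx value is only an example and must fit within the machine's physical RAM, and the CMS flag only applies to JVMs that still ship the CMS collector.

        -vmargs
        -Xmx8g
        -XX:+UseConcMarkSweepGC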

  • I want to load a large raw XML file in Firefox and parse it by DOM, but for large XML files Firefox is very slow and sometimes crashes. Is there any option to increase DOM handling memory in Firefox?

    Actually, I am using an offline form to load a very large XML file, and I am using Firefox to load that form. But it takes a long time to load, and sometimes the browser crashes while DOM-parsing this XML file into my form. Is there any option to increase the DOM handler size in Firefox?

  • Best technology to navigate through a very large XML file in a web page

    Hi!
    I have a very large XML file that needs to be displayed in my web page, maybe as a tree structure. Visitors should be able to go to nodes at any depth and access the child elements or text element of those nodes.
    I thought about using a DOM parser with Java but dropped that idea, as the DOM would be stored in memory and hence is space-consuming. Nor does SAX work for me, as every time there is a click on any of the nodes my SAX parser parses the whole document for that node, which is time-consuming.
    Could anyone please tell me the best technology and best parser to be used for very large XML files?

    Thank you for your suggestion. I have a question, though. If I use a relational database and try to access it for EACH and EVERY click the user makes, wouldn't that take much time to populate the page with data? Isn't an XML store more efficient here? Please reply.

    You have the choice of reading a small number of records (10 children per element?) from a database, or parsing multiple megabytes. Reading 10 records from a database should take maybe 100 milliseconds (1/10 of a second). I have written a web application that reads several hundred records and returns them with acceptable response time, and I am no expert. To parse an XML file of many megabytes... you have already tried this, so you know how long it takes, right? If you haven't tried it then you should. It's possible to waste a lot of time considering alternatives -- the term is "analysis paralysis". Speculating on how fast something might be doesn't get you very far.
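
    To make the "read a handful of records per click" point concrete, here is a minimal Python/sqlite3 sketch; the nodes table (id, parent_id, label) is a hypothetical stand-in for the XML tree, not anything from the thread.

        # Fetch only the children of the clicked node -- a handful of rows,
        # typically a few milliseconds, instead of re-parsing megabytes of XML.
        import sqlite3

        # Hypothetical schema standing in for the XML tree: one row per element.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE nodes (id INTEGER, parent_id INTEGER, label TEXT)")
        conn.executemany(
            "INSERT INTO nodes VALUES (?, ?, ?)",
            [(1, None, "root"), (2, 1, "chapter 1"), (3, 1, "chapter 2")],
        )

        def children_of(node_id):
            cur = conn.execute(
                "SELECT id, label FROM nodes WHERE parent_id = ?", (node_id,)
            )
            return cur.fetchall()

        print(children_of(1))   # what a single click would need to load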

  • Best data structure for dealing with very large CSV files

    Hi, I'm writing an object that stores data from a very large CSV file. The idea is that you initialize the object with the CSV file, and it then has lots of methods to make manipulating and working with the CSV file simpler: operations like copy a column, eliminate rows, perform some equation on all values in a certain column, etc. Also a method for printing back to a file.
    However, the CSV files will probably be in the 10 MB range, maybe larger, so simply loading them into an array isn't possible, as it produces an OutOfMemoryError.
    Does anyone have a data structure they could recommend that can store the large amount of data required and is easily writable? I've currently been using a RandomAccessFile, but it is awkward to write to, as well as needing an external file which would need to be cleaned up after the object is removed (something very hard to guarantee occurs).
    Any suggestions would be greatly appreciated.
    Message was edited by:
    ninjarob

    How much internal storage ("RAM") is in the computer where your program should run? I think I have 640 MB in mine, and I can't believe loading 10 MB of data would be prohibitive, not even if the size doubles when the data comes into Java variables.
    If the data size turns out to be prohibitive of loading into memory, how about a relational database?
    Another thing you may want to consider is more object-oriented (in the sense of domain-oriented) analysis and design. If the data is concerned with real-life things (persons, projects, monsters, whatever), row and column operations may be fine for now, but future requirements could easily make you prefer something else (for example, a requirement to sort projects by budget or monsters by proximity to the hero).
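
    If loading the whole file into memory really is prohibitive, the relational-database suggestion takes very little code to try. A rough Python sketch (the file name "data.csv" and the "value" column used in the example update are placeholders):

        # Stream a large CSV into an on-disk SQLite table instead of holding
        # every row in memory; column operations then become SQL statements.
        import csv
        import sqlite3

        conn = sqlite3.connect("csvdata.db")
        with open("data.csv", newline="") as f:
            reader = csv.reader(f)
            header = next(reader)
            cols = ", ".join(f'"{name}"' for name in header)
            marks = ", ".join("?" for _ in header)
            conn.execute(f"CREATE TABLE IF NOT EXISTS rows ({cols})")
            conn.executemany(f"INSERT INTO rows VALUES ({marks})", reader)
        conn.commit()

        # Example "column operation": scale a (hypothetical) numeric column.
        conn.execute('UPDATE rows SET "value" = "value" * 2.0')
        conn.commit()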

  • Re: Compiling large pgf file?

    Hi Gilbert,
    I didn't find any solution to this problem. Forte's take was that it is Microsoft's compiler problem, and Microsoft's take was that we are doing something abnormal, i.e. why are we compiling such a big project, which doesn't sound good. Anyway, we have now switched to the Sun Solaris platform and I think it's no longer an issue.
    --Shahzad
    Gilbert Hamann wrote:
    Shahzad,
    Back in October of 1997 you posted a request for help to the Forte-Users list regarding the compilation of a large (12 MB) .pgf file. Following the thread, I didn't see whether you had any success.
    We are following the same path, and are trying to compile a 22 MB .pgf file. For the future we are rearchitecting our application to make this more manageable, but for now, getting this to compile would help us a great deal.
    The same as you, we are getting:
    fatal error LNK1141, failure during build of exports error during compilation, aborting
    Did you manage to work around this problem?
    Any help would be much appreciated.
    Thanks,
    -gil
    Gilbert Hamann
    Team Lead, Energy V6 Performance Analysis
    Descartes Systems Group, Inc.
    [email protected]

    Thanks folks.
    I would like to try the suggested workaround at a future point in time. One of our initial attempts was similar, but consisted of splitting the .obj files into two libraries (using lib.exe) and then linking these two libraries to main.obj. It might have worked, but we then had another problem (our own) which I didn't solve until later.
    The main solution for us seems to be to use MSVC 6.0 (instead of 5.0). When there are no other problems in our code, setup, or environment, the compile and link work flawlessly, albeit slowly.
    -gil
    -----Original Message-----
    From: Fiorini Francesco [mailto:[email protected]]
    Sent: October 19, 1998 9:43 AM
    To: 'cshahzad'; Gilbert Hamann; [email protected]
    Subject: R: Compiling large pgf file?
    Hello chaps,
    presently here in DS we have a workaround for that problem. Here are the steps:
    1) when fcompile goes belly up, it produces a linkopt.lrf file containing the link command
    2) save a backup copy of it (e.g. linkopt.old)
    3) edit the original linkopt file by adding the /VERBOSE option to the link statement; that tells the linker to generate more output
    4) from the command line, issue
            link @linkopt.lrf > mylink.log
    5) edit mylink.log and copy the statements following the lib.exe statement into a new file (e.g. lib.lrf). Make sure the lib.exe line itself is not included in the file
    6) from the prompt, do
            lib @lib.lrf
    Once done, you should have a .LIB and an .EXP file in the codegen directory. At that point, re-issue the
            link @linkopt.old
    command. You should get the executable.
    Bye
    Francesco Fiorini
    sys.management dept.
    ds data systems s.p.a
    43100 Parma - Italy
    tel +39-05212781

  • Signal Express Large TDMS File Recording Error

    Hello,
    I have the following application and I am looking for some tips on the best way to approach the problem with Signal Express:
    I am attempting to use SignalExpress 2009 (Sound and Vibration Assistant) to collect random vibration data on three channels over an extended period of time -- about 20 hours total. My sample rate is 2 kHz. Sampling at that rate over that period of time involves the creation of a very large TDMS file, which is intended for various types of analysis in SignalExpress or some other application later on. One of the analysis functions to be done is a PSD (Power Spectral Density) plot to determine the vibration levels distributed over a band of frequencies during the log.
    My original solution was to collect a single large TDMS file. I did this with the SignalExpress recording options configured to save and restart "in current log" after 1 hour's worth of data is collected. I configured it this way because, if there is a crash or a sudden loss of power during data collection, I wanted to ensure that at most an hour's worth of data would be lost. I tested this option, and the integrity of the file after a crash, by killing the SignalExpress process in the middle of recording the large TDMS file (after a few save-log-file conditions had been met). Unfortunately, when I restart SignalExpress and try to load the log file data in playback mode, an error indicating "TDMS Data Corrupt" (or similar) is displayed. My TDMS file is large, so it obviously contains some data; however, SignalExpress does not index its time and I cannot view the data within the file. The .tdms_index file is also present, but the meta data.txt file is not generated. Is there any way to ensure that I will have at least partially valid data that can be processed from a single TDMS file in the event of a crash during logging? I don't have much experience dealing with random vibration data, so are there any tips for generating vibration-level PSD curves for large files over such a long time span?
    My solution to this problem thus far has been to log the data to separate .tdms files, about an hour in length each. This should result in about 20 files in my final application. Since I want to take a PSD, which ends up being a statistical average over the whole time period, I plan on generating a curve for each of these files and averaging all 20 of them together to get the overall vibration PSD curve for the 20-hour time period.

    JMat,
    Based on the description of your application, I would recommend writing the data to a "new log" every hour (or more often). Based on some of my testing, if you use "current log" and S&V Assistant crashes, the entire TDMS file will be corrupted. This seems consistent with what you're seeing.
    It would be good if you could clarify why you're hoping to use "current log" instead of "new log". I'll assume an answer so I can provide a few more details in this response. I assume it's because you want to be able to perform the PSD over the entire logged file (all 20 hours). And the easiest way to do that is if all 20 hours are recorded in a continuous file. If this is the case, then we can still help you accomplish the desired outcome, but also ensure that you don't lose data if the system crashes at some point during the monitoring.
    If you use "new log" for your logging configuration, you'll end up having 20 TDMS files when the run is complete. If the system crashes, any files that are already done writing will not be corrupted (I tested this). All you need to do is concatenate the files to make a single one. If this would work for you, we can talk about various solutions we can provide to accomplish this task. Let me know.
    Now there is one thing I want to bring to your attention about logging multiple files from SignalExpress, whether you use "current log" or "new log". The Windows OS is not deterministic, meaning it cannot guarantee how long an operation takes to complete. For your particular application, this basically means that between log files there will be some short gap in time during which data is not being saved to disk. Based on my testing, this gap could be between 1 and 3 seconds, and it depends heavily on how many other applications Windows is running at the same time.
    So when you concatenate the signals, you can choose to concatenate them "absolutely", meaning there will be a 1-3 second gap between the different waveforms recorded, or you can concatenate them assuming there is no time gap between logs, resulting in a pseudo-continuous waveform (it looks continuous to you and to the analysis routine).
    If neither of these options is suitable, let me know.
    Thanks, Jared 
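
    If the run does end up as 20 hourly logs, the concatenation can also be done outside SignalExpress. A hedged Python sketch using the open-source npTDMS package; the file pattern and the assumption that each log holds a single group and channel are guesses that would need checking against a real SignalExpress log.

        # Stitch hourly TDMS logs into one continuous array per channel for
        # PSD analysis.  Assumes npTDMS (pip install npTDMS); names are made up.
        from glob import glob

        import numpy as np
        from nptdms import TdmsFile

        chunks = []
        for path in sorted(glob("vibration_log_*.tdms")):
            tdms = TdmsFile.read(path)
            group = tdms.groups()[0]        # first group in each log
            channel = group.channels()[0]   # e.g. the accelerometer channel
            chunks.append(channel[:])       # samples as a NumPy array

        # "Pseudo-continuous" concatenation: ignores the 1-3 s gap between logs.
        signal = np.concatenate(chunks)
        print(signal.shape)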

  • Analyzing large TDMS files

    I am developing an application that will measure an AC voltage at 100 kHz for 4 hours. I need to calculate the voltage and frequency drift during 30-second intervals. My plan is to parse the data into small chunks (typically 1 second), calculate the RMS voltage and frequency, and determine the magnitude of drift that may be occurring. I have been able to collect data during short time intervals and analyzed it with the Tone Measurement VI and RMS VI, but am not sure how to break up large data files into smaller chunks for analysis and store the results.

    When you read a TDMS file, you can specify how many samples to read and where to start reading from.  All you need to do is read the data in chunks and process each chunk individually.
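
    As a language-neutral illustration of the chunking idea (written here in Python rather than LabVIEW, with the 1-second chunk and 100 kS/s rate taken from the question), the loop below computes one RMS value per chunk and keeps only those results; read_chunk stands in for whatever performs the partial read, such as a TDMS read with an offset and sample count.

        # Process a long record in 1-second chunks instead of loading it whole.
        import numpy as np

        SAMPLE_RATE = 100_000          # 100 kS/s, as in the question
        CHUNK = SAMPLE_RATE            # 1 second of samples per chunk

        def rms(x):
            return float(np.sqrt(np.mean(np.square(x))))

        def analyse(read_chunk, total_samples):
            results = []
            for start in range(0, total_samples, CHUNK):
                block = read_chunk(start, CHUNK)   # read only this slice
                results.append(rms(block))
            return results

        # Example with a synthetic 10-second sine wave standing in for the file:
        data = np.sin(2 * np.pi * 50 * np.arange(10 * SAMPLE_RATE) / SAMPLE_RATE)
        print(analyse(lambda start, n: data[start:start + n], len(data)))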

  • Problem reading and ploting large lvm files

    Hi everyone,
    I have some large .lvm files that I need to process offline; however, the files are quite large and I am regularly getting "out of memory" messages (7 channels sampled at 4 kHz for 15 minutes or so, maybe longer). I have managed to reduce how often this message occurs by first converting the .lvm files to TDMS and then plotting the TDMS data, but I still get the "out of memory" error somewhat regularly. I also downsample the data to 2 kHz, but that doesn't help a great deal.
    Any suggestions on how I can handle this data? I have read a number of online resources about managing large data sets (e.g. http://www.ni.com/white-paper/3625/en/), but I am not sure how to implement these suggestions.
    Basically, I want to view the content of the entire file, then use queues to extract selected data subsets into another while loop that will handle the analysis/processing (producer/consumer). I do this regularly for smaller files, so the issue is mainly how to manage the large files. Decimating the data for the initial whole-data plot may not work, as I have spikes 10 ms in width in some channels that I need to see in the main plot.
    Any help would be appreciated.
    Many thanks,
    Jack

    jcannon, I did some quick math and I don't think you should be reaching the memory limit of LabVIEW. However, it is possible that you are running out of contiguous memory on your computer while the program is running. See this for a quick brief about contiguous memory.
    If I were you, I would try to reduce the number of times LabVIEW copies information in memory. Use "Show Buffer Allocations" to find out where in your code you are making copies of memory.
    Best of luck!
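
    On the plotting side, one option the thread does not mention is min/max decimation: keep each block's minimum and maximum so that narrow spikes survive the reduction. A rough Python sketch (the block size and the synthetic data are arbitrary):

        # Min/max decimation: shrink a long record for an overview plot while
        # keeping 10 ms-wide spikes visible.
        import numpy as np

        def minmax_decimate(x, block=1000):
            n = (len(x) // block) * block          # drop the ragged tail
            blocks = x[:n].reshape(-1, block)
            out = np.empty(2 * blocks.shape[0])
            out[0::2] = blocks.min(axis=1)
            out[1::2] = blocks.max(axis=1)
            return out                             # ~2*len(x)/block points

        x = np.random.randn(4_000 * 900)           # 15 minutes at 4 kS/s
        overview = minmax_decimate(x, block=1000)
        print(len(x), "->", len(overview))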

  • Arbitrary waveform generation from large text file

    Hello,
    I'm trying to use a PXI 6733 card hooked up to a BNC 2110 in a PXI 1031-DC chassis to output arbitrary waveforms at a sample rate of 100kS/s.  The types of waveforms I want to generate are generally going to be sine waves of frequencies less than 10 kHz, but they need to be very high quality signals, hence the high sample rate.  Eventually, we would like to go up to as high as 200 kS/s, but for right now we just want to get it to work at the lower rate. 
    Someone in the department has already created large text files for me (> 1 GB) with 9 columns of numbers representing the output voltages for the channels (there will be 6 channels outputting sine waves and 3 other channels with a periodic DC voltage). The reason for the large file is that we want a continuous signal for around 30 minutes to allow for equipment testing and configuration while the signals are being generated.
    I'm supposed to use this file to generate the output voltages on the 6733 card, but I keep getting numerous errors and I've been unable to get something that works. The code, as written, currently generates error code -200290 immediately after the buffered data is output from the card. Nothing ever seems to get enqueued or dequeued, and although I've read the LabVIEW help on buffers, I'm still very confused about their operation, so I'm not even sure whether the buffer is working properly. I was hoping some of you could look at my code and give me some suggestions (or sample code too!) for the best way to achieve this goal.
    Thanks a lot,
    Chris (new LabVIEW user)

    Chris:
    For context, I've pasted in the "explain error" output from LabVIEW to refer to while we work on this. More after the code...
    Error -200290 occurred at an unidentified location
    Possible reason(s):
    The generation has stopped to prevent the regeneration of old samples. Your application was unable to write samples to the background buffer fast enough to prevent old samples from being regenerated.
    To avoid this error, you can do any of the following:
    1. Increase the size of the background buffer by configuring the buffer.
    2. Increase the number of samples you write each time you invoke a write operation.
    3. Write samples more often.
    4. Reduce the sample rate.
    5. Change the data transfer mechanism from interrupts to DMA if your device supports DMA.
    6. Reduce the number of applications your computer is executing concurrently.
    In addition, if you do not need to write every sample that is generated, you can configure the regeneration mode to allow regeneration, and then use the Position and Offset attributes to write the desired samples.
    By default, the analog output on the device does what is called regeneration. Basically, if we're outputting a repeating waveform, we can simply fill the buffer once and the DAQ device will reuse the samples, reducing load on the system. What appears to be happening is that the VI can't read samples out from the file fast enough to keep up with the DAQ card. The DAQ card is set to NOT allow regeneration, so once it empties the buffer, it stops the task since there aren't any new samples available yet.
    If we go through the options, we have a few things we can try:
    1. Increase background buffer size.
    I don't think this is the best option. Our issue is with filling the buffer, and this requires more advanced configuration.
    2. Increase the number of samples written.
    This may be a better option. If we increase how many samples we commit to the buffer, we can increase the minimum time between writes in the consumer loop.
    3. Write samples more often.
    This probably isn't as feasible. If anything, you should probably have a short "Wait" function in the consumer loop where the DAQmx write is occurring, just to regulate loop timing and give the CPU some breathing space.
    4. Reduce the sample rate.
    Definitely not a feasible option for your application, so we'll just skip that one.
    5. Use DMA instead of interrupts.
    I'm 99.99999999% sure you're already using DMA, so we'll skip this one also.
    6. Reduce the number of concurrent apps on the PC.
    This is to make sure that the CPU time required to maintain good loop rates isn't being taken by, say, an antivirus scanner or something. Generally, if you don't have anything major running other than LabVIEW, you should be fine.
    I think our best bet is to increase the "Samples to Write" quantity (to increase the minimum loop period), and possibly to delay the DAQmx Start Task and consumer loop until the producer loop has had a chance to build the queue up a little. That should reduce the chance that the DAQmx task will empty the system buffer and ensure that we can prime the queue with a large quantity of samples. The consumer loop will wait for elements to become available in the queue, so I have a feeling that the file read may be what is slowing the program down. Once the queue empties, we'll see the DAQmx error surface again. The only real solution is to load the file to memory farther ahead of time.
    Hope that helps!
    Caleb Harris
    National Instruments | Mechanical Engineer | http://www.ni.com/support
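
    The "prime the queue before starting the output" advice generalizes beyond LabVIEW. Below is a hedged Python sketch of the same producer/consumer shape; the file name, chunk size, and write_to_device() are placeholders, not the real DAQmx API.

        # Producer/consumer: a reader thread parses the large text file in
        # chunks while the consumer drains a bounded queue, and output only
        # starts once a few chunks have been queued up.
        import queue
        import threading

        import numpy as np

        CHUNK_ROWS = 100_000               # rows committed per device write
        PRIME_DEPTH = 5                    # chunks queued before output starts
        buffer = queue.Queue(maxsize=20)   # bounded, so the reader can't run away

        def write_to_device(block):
            """Placeholder for the real hardware write."""
            pass

        def producer(path):
            rows = []
            with open(path) as f:
                for line in f:
                    rows.append([float(v) for v in line.split()])
                    if len(rows) == CHUNK_ROWS:
                        buffer.put(np.array(rows))   # blocks while the queue is full
                        rows = []
            if rows:
                buffer.put(np.array(rows))
            buffer.put(None)                         # sentinel: end of file

        def consumer():
            while (block := buffer.get()) is not None:
                write_to_device(block)

        reader = threading.Thread(target=producer, args=("waveform.txt",))
        reader.start()
        while buffer.qsize() < PRIME_DEPTH and reader.is_alive():
            pass                                     # let the queue fill before output
        consumer()
        reader.join()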

  • Problems viewing large tif files in preview

    I'm having trouble viewing large .tif files in Preview. The .tif files are on a CD-ROM; when I click on a file to open it, it opens only the first 250 pages of the file. The file has approximately 3,000 pages. How can I get Preview to open the rest of the pages?
    Thanks for any suggestions.
    mac mini   Mac OS X (10.4.6)  

    No trick - I didn't create the CD-ROM, but it only has 3 .tif files with approximately 3,000 pages each, not 3,000 large .tif files, plus several smaller PDF files, and those aren't giving me any problems.
    I don't know whether they're compressed, but I still can't get more than the first 250 pages to open, even after copying the file to my desktop. If anyone has any other ideas, I'd much appreciate it.
    mac mini   Mac OS X (10.4.6)  

  • How can I send a large video file from my iPhone 4?

    How can I send (by text) a fairly large video file from my iPhone 4 without it getting compressed first?
    If that's not possible, is there a way to uncompress the video file once it is received on the receiving iPhone?
    Or is there a way to sync or transfer a video file from iPhone to iPhone?

    exFAT would be the best file system format, as it will handle files greater than 4 GB.
    If exFAT is not available, go for FAT32.
    Plain FAT is too limiting, so avoid it.

  • Error in generating a new XSL Transformer from large xslt File

    Good day to all,
    Currently I am facing a problem: whenever I try to generate a Transformer object from a TransformerFactory, a TransformerConfigurationException is thrown. I have done some research on the net and understand that it is due to the JVM's 64 KB method size limit. Is there any external package or project that has already addressed this problem? I have checked Apache, and they have already patched the problem in Xalan 2.7.1; however, I couldn't find any release of 2.7.1.
    Please help
    Regards
    RollinMao

    If you have the transformation rules in a separate XSLT file, then you can use the com.icl.saxon package to transform the XML files. I have used this package with large XSL files and it has worked well.
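
    If staying on the JVM is not a hard requirement, another workaround (not mentioned in the thread) is to run the transformation with a non-Java processor; for example, Python's lxml binds libxslt, which has no 64 KB compiled-method limit. A minimal sketch, with placeholder file names:

        # Apply a large XSLT stylesheet with lxml/libxslt.
        from lxml import etree

        transform = etree.XSLT(etree.parse("transform.xsl"))
        result = transform(etree.parse("input.xml"))
        print(str(result)[:200])   # or write the serialized result to a file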

  • Is there a way to import large XML files into HANA efficiently? Are there any data services provided to do this?

    1. Is there a way to import large XML files into HANA efficiently?
    2. Will it process it node by node or the entire file at a time?
    3. Are there any data services provided to do this?
    This is for a project use case. I also have a requirement to process bulk XML files; please suggest how I can accomplish this task.

    Hi Patrick,
    I am addressing a similar issue: getting data from huge XMLs into HANA.
    Using OData services, can we handle huge data (i.e. create the schema / load into HANA) on the fly?
    In my scenario, I get a folder of different complex XML files which are to be loaded into the HANA database. Then I have to transform and cleanse the data.
    Can I use OData services to transform and cleanse the data? If so, how can I create OData services dynamically?
    Any help is highly appreciated.
    Thank you.
    Regards,
    Alekhya
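
    On the node-by-node part of the question: a large XML file can be streamed element by element and the extracted rows handed in batches to whatever loads them into HANA. A hedged Python sketch using xml.etree.iterparse; the "record" tag and handle_record() are placeholders for the real document structure and loader.

        # Stream a huge XML file instead of building the whole DOM; each
        # completed <record> element is processed and then freed.
        import xml.etree.ElementTree as ET

        def handle_record(elem):
            # e.g. collect fields into a row and insert into HANA in batches
            pass

        for event, elem in ET.iterparse("big_file.xml", events=("end",)):
            if elem.tag == "record":
                handle_record(elem)
                elem.clear()       # release the element so memory stays flat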

  • Copying large video files to PC-external HD

    I need to deliver a large QuickTime file (5.87 GB) edited in FCE to someone who will open it on his PC in Avid.
    It seems to be complicated:
    The first problem is conversion - I need to convert my QT file to a Windows Media file, right? This can be done with a conversion programme, e.g. Microsoft Expression Encoder, which I have installed on a PC laptop.
    But how do I move the 5.87 GB file to the PC? I have an external HD formatted FAT32, but that only takes files up to 4 GB max. Is there really no way to copy a large .mov file onto an external hard drive that can be read by a PC?
    To get around the problem, I have tried to make an .avi file by exporting from FCE with QuickTime conversion - but the quality is really bad. Also, making a smaller QT file (that does not exceed the 4 GB limit on the HD) does not look good when compressed a second time in Expression to a Windows Media file.
    So, what to do? Can PCs read an NTFS-formatted HD? I guess not; I tried once, but maybe I didn't format it right.
    Sorry for all these questions in one go - hopefully there's a way to work around this annoying problem...

    For file transfer, here's what I suggest (sort of in order of recommendation):
    Use a network connection. Both computers (your Mac & your friend's pc) on the same network; turn on file sharing on your Mac, log the pc on to your Mac and copy the file(s).
    Use an FTP connection via the internet. You and your friend will both need an FTP client.
    Use an NTFS formatted external hard drive. All modern pcs use NTFS (Windows 2000/XP/Vista/7).  You will need a copy of NTFS for Mac (or similar utility) on your Mac to write to an NTFS formatted disk.
    Use a Mac OS Extended formatted external hard drive.  Your friend will need a copy of MacDrive (or similar utility) on his pc to read files from a Mac OS Extended formatted disk.
    Burn your 5.87GB file to a dual-layer DVD disk. Use the Finder to copy the file to DVD media & burn the disk.  Your Mac will need to be able to burn to dual-layer DVD media; and your friend's pc will need to have a DVD drive to read the disk.
