Are Analytic Workspaces suitable for very large data sets?

Hi all,
I have run many different tests with analytic workspaces and I have used the different features (compression, composites, ...). The results, especially for maintenance, are disappointing.
I have a star schema with 6 dimensions. The fact table has 730 million rows; the first dimension has 2.9 million rows and the other 5 dimensions have between 25 and 300 rows each.
My conclusion is that Analytic Workspaces don't help in situations like mine. The maintenance time is very, very bad, not to mention the time for aggregations. I even tried to populate the cube in parts (90 million rows for the first load), but nothing changed. There are also problems with storage and tablespaces (I always get an "unable to extend TEMP tablespace" message, even though the tablespace is 54 GB).
Is there something I'm missing? Does anyone have a similar problem or a different opinion?
Thank you,
Ilias

A few other tips to add to Keith's excellent advice:
- How many CPUs does your server have? The answer may help you decide the optimal level to partition at (in my experience DAY is too low and can cause different problems). What other levels does your time dimension have? Are you loading your cubes in parallel?
- To speed up your load, partition your underlying fact table at the same granularity as your cubes and place an index on the column mapped to the partition dimension (see the sketch after this list).
- Are you using 10.2.0.3? If so, be very careful with the storage data type you choose when creating your cubes. The default in 10.2.0.3 is NUMBER, which can store data to 38 significant figures. This usually exceeds what most datasets require. If your data allows storage of 15 significant figures, create your cubes with the DECIMAL data type instead. This will use about one third of the storage space and significantly increase your build speeds (in my experience, more than 3 times faster).
- Make sure you have preallocated enough permanent and temporary tablespace for your build. Autoextending can be very time consuming.
- Consider reducing the amount of aggregation you do in batch. It should not be necessary to pre-aggregate everything in order to get good query performance.
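To make the fact-table suggestions concrete, here is a minimal sketch of a fact table range-partitioned by month with a local index on the time column, plus preallocating temp space instead of relying on autoextend. All object names, dates, paths and sizes are invented for illustration; adapt them to your own schema.

-- Hypothetical fact table, range-partitioned to match the cube's partition dimension.
CREATE TABLE sales_fact (
  time_id  DATE          NOT NULL,
  cust_id  NUMBER        NOT NULL,
  prod_id  NUMBER        NOT NULL,
  amount   NUMBER(15,2)
)
PARTITION BY RANGE (time_id) (
  PARTITION p200701 VALUES LESS THAN (DATE '2007-02-01'),
  PARTITION p200702 VALUES LESS THAN (DATE '2007-03-01'),
  PARTITION p200703 VALUES LESS THAN (DATE '2007-04-01')
);

-- Local index on the column mapped to the cube's partition dimension,
-- so each cube partition load only scans the matching table partitions.
CREATE INDEX sales_fact_time_ix ON sales_fact (time_id) LOCAL;

-- Preallocate temp space up front instead of autoextending during the build.
ALTER TABLESPACE temp ADD TEMPFILE '/u02/oradata/temp02.dbf' SIZE 20G;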
Generally, I would say that the volume should not be a problem. A single dimension with 2.9 million values is fairly big and can be slow (in OLAP terms) to query but that should not be an obstacle to building it in the first place.
Good luck!
Stuart

Similar Messages

  • How can we suggest a new DBA OCE certification for very large databases?

    What web site can we go to, or what phone number can we call, to suggest creating a VLDB OCE certification?
    The largest databases that I have ever worked with were barely over 1 trillion bytes.
    Some people told me that the job of a DBA totally changes when you have a VERY LARGE DATABASE.
    I could guess that some of the following configuration topics might be on it:
    * Partitioning
    * parallel
    * bigger block size - DSS vs OLTP
    * etc
    Where could I send in a recommendation?
    Thanks Roger

    I wish there were some details about the OCE data warehousing.
    Look at the topics for 1Z0-515. Assume that the 'lightweight' topics will go (like Best Practices) and that there will be more technical topics added.
    Oracle Database 11g Data Warehousing Essentials | Oracle Certification Exam
    Overview of Data Warehousing
      Describe the benefits of a data warehouse
      Describe the technical characteristics of a data warehouse
      Describe the Oracle Database structures used primarily by a data warehouse
      Explain the use of materialized views
      Implement Database Resource Manager to control resource usage
      Identify and explain the benefits provided by standard Oracle Database 11g enhancements for a data warehouse
    Parallelism
      Explain how the Oracle optimizer determines the degree of parallelism
      Configure parallelism
      Explain how parallelism and partitioning work together
    Partitioning
      Describe types of partitioning
      Describe the benefits of partitioning
      Implement partition-wise joins
    Result Cache
      Describe how the SQL Result Cache operates
      Identify the scenarios which benefit the most from Result Set Caching
    OLAP
      Explain how Oracle OLAP delivers high performance
      Describe how applications can access data stored in Oracle OLAP cubes
    Advanced Compression
      Explain the benefits provided by Advanced Compression
      Explain how Advanced Compression operates
      Describe how Advanced Compression interacts with other Oracle options and utilities
    Data integration
      Explain Oracle's overall approach to data integration
      Describe the benefits provided by ODI
      Differentiate the components of ODI
      Create integration data flows with ODI
      Ensure data quality with OWB
      Explain the concept and use of real-time data integration
      Describe the architecture of Oracle's data integration solutions
    Data mining and analysis
      Describe the components of Oracle's Data Mining option
      Describe the analytical functions provided by Oracle Data Mining
      Identify use cases that can benefit from Oracle Data Mining
      Identify which Oracle products use Oracle Data Mining
    Sizing
      Properly size all resources to be used in a data warehouse configuration
    Exadata
      Describe the architecture of the Sun Oracle Database Machine
      Describe configuration options for an Exadata Storage Server
      Explain the advantages provided by the Exadata Storage Server
    Best practices for performance
      Employ best practices to load incremental data into a data warehouse
      Employ best practices for using Oracle features to implement high performance data warehouses

  • Where to find Analytic Workspace Manager for Oracle9i?

    Hi,
    I am looking for Analytic Workspace Manager for Oracle 9i. Could someone point me to where it can be downloaded? On Oracle OTN there are only the 10g and 11g versions. I couldn't find it on Metalink either.
    Why do I need such an old version? We are about to upgrade our existing installation from 9i, and I need it to see what's inside our Oracle 9i database (9.2.0.6) before going to 10g.
    Or maybe it is possible to use AWM 10g against a 9i database? This would solve my problems.
    Is it possible to upgrade analytic workspaces from Oracle 9i to 10g with Analytic Workspace Manager 10g?
    Best wishes
    Tomasz Michniewski

    Hello Laura,
    Well, I was looking for this 9i version of AWM, and I was even starting to think that AWM does not exist for 9i, especially since AWM is not mentioned in the 9i documentation.
    But in the meantime I have found an installer on one of our backup CD-ROMs. It says:
    # Analytic Workspace Manager : 9.2.0.4.1
    # DATE: September 26, 2003
    # Platform Patch for : platform independent
    # Product Version # : 9.2.0.4.1
    # Product : Analytic Workspace Manager
    # Platforms
    # Analytic Workspace Manager uses a platform independent install and
    # has been approved on the following platforms:
    # - Windows NT 4.0, 2000 & XP
    # - Solaris 32 & 64-bit
    # - Linux 32-bit
    # - AIX 64-bit
    # - HP-UX 64-bit
    # - Tru64
    # Requirements
    # The Analytic Workspace Manager client requires the following Oracle9i
    # Database configuration:
    # - Oracle 9.2.0.1.0 Enterprise Edition Database
    # - RDBMS 9.2.0.4.0 patch set (PS# 3095277)
    # - OLAP 9.2.0.4.1 patch (PS# 3084634)
    So is it the AWM for 9i?
    Best wishes,
    Tomasz Michniewski

  • Grid Control Architecture for Very Large Sites: New Article published

    A new article on Grid Control was published recently:
    Grid Control Architecture for Very Large Sites
    http://www.oracle.com/technology/pub/articles/havewala-gridcontrol.html

    Oliver,
    Thanks for the comments. The article is based on practical experience. If one were to recommend a pool of 2 management servers for a large corporation with 1000 servers, then if 1 server were brought down for any maintenance reason (e.g. applying an EM patch), all the EM workload would fall on the remaining management server. So it is better to have 3 management servers instead of 2 when the EM system is servicing so many targets. Otherwise, the DBAs would be a tad angry, since the single remaining management server would not be able to service them properly while the first one was down for maintenance.
    The article ends with these words: "You can easily manage hundreds or even *thousands* of targets with such an architecture. The large corporate which had deployed this project scaled easily up to managing 600 to 700 targets with a pool of just three management servers, and the future plan is to manage *2,000 or more* targets which is quite achievable." The 2,000 or more is based on the same architecture of 3 management servers.
    So as per the best practice document, 2 management servers would be fine for 1000 servers, although I would still advise 3 servers in practice.
    For your case of 200 servers, it depends on the level of monitoring you are planning to do and the type of database management activities the DBAs will perform. For example, if the DBAs are planning on creating standby databases now and then through Grid Control, running backups daily via Grid Control, cloning databases in Grid Control, patching databases in Grid Control and so on, I would definitely advise a pool of 2 servers in your case. 2 is always better than 1.
    Regards,
    Porus.

  • XML Solutions for Large Data Sets

    Hi,
    I'm working with a large data set (9 million records comprising 36 gigabytes) and am exploring the use of XML with it.
    I've experimented with a JDBC app (taken straight from Steve Muench's excellent Oracle XML Applications) for writing to CLOBs, but I achieve throughputs of much less than 40 KB/s (the minimum speed required to process the data in under 10 days).
    What kind of throughput is possible loading XML records from CLOBs into multiple tables (using server-side Java apps)?
    Could anyone comment on whether XML is feasible for a data set of this size?
    Regards,
    Mike

    Just would like to identify myself (I'm the submitter):
    Michael Driscoll <[email protected]>.
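    (As a point of comparison, and not what the original JDBC/CLOB approach used: later Oracle releases can shred XML held in a CLOB into relational rows directly in SQL with XMLTABLE, keeping the work server-side. A minimal sketch, where the staging table, target table and element names are entirely hypothetical:)
    -- xml_staging holds one XML document per row in a CLOB column called doc.
    INSERT INTO target_table (id, name, amount)
    SELECT x.id, x.name, x.amount
    FROM   xml_staging s,
           XMLTABLE('/records/record'
                    PASSING XMLTYPE(s.doc)
                    COLUMNS id     NUMBER        PATH 'id',
                            name   VARCHAR2(100) PATH 'name',
                            amount NUMBER        PATH 'amount') x;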

  • Large data sets and key terms

    Hello, I'm looking for some guidance on how BI can help me. I am a business analyst in a health solutions firm, but not proficient in SQL. However, I have to work with large data sets that just exceed the capabilities of Excel.
    Basically, I'm having to use Excel to manually search for key terms and apply values to those results. For instance, I have a medical claims file with Provider Names, Tax ID, Charges, etc. It's 300,000 records long and 15-25 columns wide. I need to search for key terms in the provider name like Ambulance, Fire Dept, Rescue, EMT, EMS, etc. - anything that resembles an ambulance service. I also need to include abbreviations such as AMB or FD, and variations like EMT, E M T, EMS, E M S, etc. Each time I do a search, I have to filter and apply an "N/A" flag.
    That's just one key term. I also have things like Dentists or DDS, Vision, Optometry and a dozen other Provider Types that need to be flagged as "N/A".
    Is this something that can be handled using BI? I have access to a BI group, but I need to understand more about the capabilities of what can be done. As an analyst, I'm having to deal with poor data integrity, so just cleaning up the file can be extremely taxing and cumbersome.
    Some insight would be very helpful. Thanks.

    I am not sure if you are looking for an explanation of different BI products. If so, maybe this forum is not the place to get a straight answer.
    But the Information Discovery product suite might be useful in your case. Regarding the "large data set" you mentioned, searching and analyzing 300,000 records may not be considered a large data set, at least by Endeca standards :).
    All your other requests could also be implemented very easily using Endeca's product suite. Please reach out to Oracle's Endeca product team and they can guide you on how this product suite would help you.
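    (For what it's worth, once the file is loaded into a database table, the kind of keyword flagging described above can also be written directly in SQL. This is only a sketch; the table, column and patterns are invented for the example.)
    -- Flag providers whose name looks like an ambulance/EMS service.
    UPDATE claims
    SET    provider_flag = 'N/A'
    WHERE  REGEXP_LIKE(provider_name,
                       'AMBULANCE|RESCUE|FIRE DEPT|EMT|E M T|EMS|E M S|(^| )AMB( |$)|(^| )FD( |$)', 'i');
    A similar pattern for each provider type (DDS, Vision, Optometry, and so on) can be run as separate updates or folded into one CASE expression.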

  • How to handle large data sets?

    Hello All,
    I am working on an editable form document. It uses a flowing subform with a table. The table may contain up to 50k rows, and the generated PDF may take up to 2-4 GB of memory; in some cases Adobe Reader fails and "gives up" opening these large data sets.
    Any suggestions?

    On 25.04.2012 01:10, Alan McMorran wrote:
    > How large are you talking about? I've found QVTo scales pretty well as
    > the dataset size increases but we're using at most maybe 3-4 million
    > objects as the input and maybe 1-2 million on the output. They can be
    > pretty complex models though so we're seeing 8GB heap spaces in some
    > cases to accommodate the full transformation process.
    Ok, that is good to know. We will be working in roughly the same order
    of magnitude. The final application will run on a well equipped server,
    unfortunately my development machine is not as powerful so I can't
    really test that.
    > The big challenges we've had to overcome is that our model is
    > essentially flat with no containment in it so there are parts of the
    We have a very hierarchical model. I still wonder to what extent EMF and
    QVTo at least try to let go of objects which are not needed anymore and
    allow them to be garbage collected?
    > Is the GC overhead limit not tied to the heap space limits of the JVM?
    Apparently not, quoting
    http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html:
    "The concurrent collector will throw an OutOfMemoryError if too much
    time is being spent in garbage collection: if more than 98% of the total
    time is spent in garbage collection and less than 2% of the heap is
    recovered, an OutOfMemoryError will be thrown. This feature is designed
    to prevent applications from running for an extended period of time
    while making little or no progress because the heap is too small. If
    necessary, this feature can be disabled by adding the option
    -XX:-UseGCOverheadLimit to the command line."
    I will experiment a little bit with different GCs, namely the parallel GC.
    Regards
    Marius
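    (As a concrete starting point for that experiment: -XX:-UseGCOverheadLimit is the flag quoted above, and -XX:+UseParallelGC selects the parallel collector mentioned. The heap size and main class below are placeholders, not taken from this thread.)
    java -Xmx8g -XX:+UseParallelGC -XX:+UseParallelOldGC -XX:-UseGCOverheadLimit -verbose:gc com.example.RunTransformation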

  • 64-bit LabVIEW - still major problems with large data sets

    Hi Folks -
    I have LabVIEW 2009 64-bit version running on a Win7 64-bit OS with Intel Xeon dual quad core processor, 16 gbyte RAM.  With the release of this 64-bit version of LabVIEW, I expected to easily be able to handle x-ray computed tomography data sets in the 2 and 3-gbyte range in RAM since we now have access to all of the available RAM.  But I am having major problems - sluggish (and stoppage) operation of the program, inability to perform certain operations, etc.
    Here is how I store the 3-D data that consists of a series of images. I store each of my 2d images in a cluster, and then have the entire image series as an array of these clusters.  I then store this entire array of clusters in a queue which I regularly access using 'Preview Queue' and then operate on the image set, subsets of the images, or single images.
    Then I enqueue it (screenshot of the enqueue step omitted).
    I remember talking to LabVIEW R&D years ago and hearing that this was a good way to do things because it allowed non-contiguous access to memory (versus the contiguous access that would be required if I stored my image series as a 3-D array without the clusters) (R&D - this is what I remember, please correct me if wrong).
    Because I am experiencing tremendous slowness in the program after these large data sets are loaded (and, I think, disk access as well once memory use goes beyond 16 GB), I am wondering if I need to use a different storage strategy that will allow seamless program operation while still using RAM storage (I do not want to have to recall images from disk).
    I have other CT imaging programs that are running very well with these large data sets.
    This is a critical issue for me as I move forward with LabVIEW in this application.   I would like to work with LabVIEW R&D to solve this issue.  I am wondering if I should be thinking about establishing say, 10 queues, instead of 1, to address this.  It would mean a major program rewrite.
    Sincerely,
    Don

    First, I want to add that this strategy works reasonably well for data sets in the 600 - 700 mbyte range with the 64-bit LabVIEW. 
    With LabVIEW 32-bit, 100 - 200 MB sets were about the limit before I experienced problems.
    So I definitely noticed an improvement.
    I use the queuing strategy to move this large amount of data in RAM.  We could have used other means such as LV2 globals.  But the idea of clustering the 2-D array (image) and then holding a series of those clusters in an array (the final structure I showed in my diagram), versus using a 3-D array, is I believe what allowed me to get this far using RAM instead of recalling the images from disk.
    I am sure data copies are being made - yes, the memory is ballooning to 15 GB.  I probably need to have someone examine this code while I explain things to them live.  This is a very large application, and a significant amount of time would be required to simplify it, and that might not allow us to duplicate the problem.  In some of my applications I use the in-place structure for indexing data out of arrays to minimize data copies; I expect I might have to consider that strategy here as well.  Just a thought.
    What I can do is send someone (in the US) a 1.3 - 2.7 GB set of image data via large file transfer, and see how they would advise on storing and extracting the images using RAM, how best to optimize the RAM usage, and how to avoid data copies.  The operations I apply to the images are irrelevant; it is the storage, movement, and extraction that are causing the problems.  I can also show screen shots of how I extract the images (but I have major problems even before I get to that point).
    Can someone else comment on how data value references may help here, or how they have helped in one of their applications?  Would using them eliminate copies?  I currently have to wait for the 64-bit version of the Advanced Signal Processing Toolkit for LabVIEW 2010 before I can move to LabVIEW 2010.
    Don

  • Working with Large data sets Waveforms

    When collecting data at a high rate (30 kHz) for a long period (120 seconds), I'm unable to rearrange the data due to memory errors. Is there a more efficient method?
    Attachments:
    Convert2Dto1D.vi (36 KB)

    Some suggestions:
    Preallocate your final data before you start your calculations.  The build array you have in your loop will tend to fragment memory, giving you issues.
    Use the In Place Element to get data to/from your waveforms.  You can use it to get single waveforms from your 2D array and Y data from a waveform.
    Do not use Transpose and autoindexing; they add a copy of the data.
    Use the Array palette functions (e.g. Reshape Array) to change sizes of current data in place (if possible).
    You may want to read Managing Large Data Sets in LabVIEW.
    Your initial post is missing some information.  How many channels are you acquiring and what is the bit depth of each channel?  30kHz is a relatively slow acquisition rate for a single channel (NI sells instruments which acquire at 2GHz).  120s of data from said single channel is modestly large, but not huge.  If you have 100 channels, things change.  If you are acquiring them at 32-bit resolution, things change (although not as much).  Please post these parameters and we can help more.
    This account is no longer active. Contact ShadesOfGray for current posts and information.

  • Need to load large data set from Oracle table onto desktop using ODBC

    I don't have TOAD nor any other tool for querying the database.  I'm wondering how I can load a large data set from an Oracle table onto my desktop using Excel or Access or some other tool using ODBC or not using ODBC if that's possible.  I need results to be in a .csv file or something similar. Speed is what is important here.  I'm looking to load more than 1 million but less than 10 million records at once.   Thanks.

    hillelhalevi wrote:
    I don't have TOAD nor any other tool for querying the database.  I'm wondering how I can load a large data set from an Oracle table onto my desktop using Excel or Access or some other tool using ODBC or not using ODBC if that's possible.  I need results to be in a .csv file or something similar. Speed is what is important here.  I'm looking to load more than 1 million but less than 10 million records at once.   Thanks.
    Use Oracle's free Sql Developer
    http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html
    You can just issue a query like this
    SELECT /*csv*/ * FROM SCOTT.EMP
    Then just save the results to a file
    See this article by Jeff Smith for other options
    http://www.thatjeffsmith.com/archive/2012/05/formatting-query-results-to-csv-in-oracle-sql-developer/
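    (For example, running the following as a script in SQL Developer writes the /*csv*/-formatted output straight to a file; the path is just a placeholder.)
    spool c:\temp\emp.csv
    SELECT /*csv*/ * FROM SCOTT.EMP;
    spool off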

  • The command,"Dataloadred" is not working for a very large data file

    I have a file whose size is 1.8 GB and it has 3 data channels, including the time channel. I tried reduced loading of the file using the command "DataLoadRed" at an interval of 5. Below is the script.
    'start script----------------------------
    call Filenameget("Data","fileread")
    call DataLoadHdFile(Filedlgfile)
    call DataLoadRed("Filedlgfile","2-3",1,0,"First interval value","Start/Width/Number",10,453028984,5,90605794,1)
    'end script-------------------------------------
    The following error message was displayed on the screen, resulting from the command "DataLoadRed()":
    "Loading file (filename): Insufficient channels are available with the required channel length [3/90605794]."
    To resolve this problem, I tried to allocate a channel length of 200M using the command "ChnAlloc()", but this also resulted in the same kind of error as above.
    How can I resolve this problem and load my data with reduction? Your reply would be appreciated.
    Regards,
    Sky

    Hi,
    Please try this:
    1. Start DIAdem
    2. Open the "settings" menu
    3. Choose "Memory management..."
    4. Click the "Data matrix..." button
    5. In the dialog, set the "No. of channels" and "Channel length" to meet your requirements
    6. Click "Close"
    7. DIAdem will now restart and set up the data channels at the length you have selected
    Depending on how large you set the data matrix size, starting DIAdem may take a few minutes (this also depends on your computer's hardware).
    An alternative to loading and reducing data sets may be the "Register File" function in the DATA window. Once you have clicked on the DATA icon, select the "File" menu and choose the "Register File..." option. Registering files will not actually load a data set, and thus speeds up the data access part of DIAdem. To learn more about this function, go to the help system and search for "Registering a file in DIAdem DATA".
    I hope this will help you. If you have any additional questions, please let me know.
    Otmar
    Otmar D. Foehner
    Business Development Manager
    DIAdem and Test Data Management
    National Instruments
    Austin, TX - USA
    "For an optimist the glass is half full, for a pessimist it's half empty, and for an engineer is twice bigger than necessary."

  • Help Using a webservice to report back very large data

    Hi All,
    I am in the process of creating a web service to report back lots of data according to some input params; I am using Axis2 to test the SOAP message.
    Now the problem: the web service will be collecting data from the live database, which is attached to a web application. The application has high usage and I don't want to affect its service.
    Do you think it will affect that service, and would we be better off reporting against a backup database or something?
    Also, a client can request data on a certain user with just a plain string ID in the WSDL file (the data returned will be a very large user history). Would it be best to keep this as a single-user input, or to create an array of strings with a max length (set in an XSD property)?
    My concern is that someone might just implement a client with the single-ID request and call it once per user, so the number of requests would be
    (number of customers) * (number of users).
    Or would the array XSD allow them to report on, say, 10 at a time?

    Hi Steve
    It seems like you have a lot of different versions of Crystal Reports and Business Objects products and you are getting them mixed up a bit. It can be confusing.
    So basically you want to access Crystal reports via the BO SDK and you want the reports to connect to web services.
    1. Creating the Report
    Since you have BOE XI R2 my guess is that you have a copy of Crystal Reports XI R2 or R1.
    Create the report with XI because it comes with a special XML Web Services data source driver. My guess is that you already have the web service created.
    2. Publish the report to BOE XI.
    Using the report designer publish the report to BOE (Save As).
    3. Write code to view the report.
    If you need to change the data source at runtime then you will want to use the Report Application Server (RAS) SDK to do that. If you are only changing the data source to move from Dev/QA/Production then you may not want to do this task at runtime; in the CMC you should be able to change the data source for migration purposes. If you truly need to do it at runtime then you want to use RAS.
    Here's a sample to get you started.
    http://diamond.businessobjects.com/node/6197
    <a href="/blog/10">Rob&#39;s blog - http://diamond.businessobjects.com/robhorne</a>

  • Using SRM for very large contracts and contract management

    We are doing an SRM 7.01 implementation project. SRM will be used primarily for outsourced contract management. The contracts are all services associated with facilities (plant) maintenance and also support services like cleaning or catering.
    They have very large numbers of individually priced items (e.g. 10,000) per contract. The item price depends on the location where the work is expected to be performed. The location is represented by an SAP RE-FX architectural object. Pricing can be done at any level of the hierarchy, e.g. service A is priced the same across the whole state but service B is priced per campus.
    q1. SAP advises that there are performance limitations on SRM contracts with more than 2000 lines. Does anyone have experience with a solution that supports very large contracts in SRM? How did you do it, please?
    q2. SAP advises using the plant to represent the location for pricing purposes, but this would result in a very large number of plants. Does anyone have experience with alternative solutions for variable location pricing in SRM contracts, i.e. integrating the RE-FX architectural object or similar into contract and PO line items?
    thanks very much

    Hi Prakash,
    SRM does provide contract management functionality with Purchase Contracts and Global Outline Agreements, but it is used as part of sourcing for materials and services. The materials or services have contracts with a given target value against which POs are released. The contract is based on a material number (either a material or a service) which is used as a source of supply during the creation of the Shopping Cart. It might not really fit the scenario of carriers and freight forwarders, but it can still be customized for this kind of use.
    The contract management functionalities in the R/3 space can also be looked on for this purpose.
    Reg
    Sachin

  • Now I understand the reason for very large screen monitors

    I now understand why many people want to buy the new very large screen Macs. I have always loved my 17" flatscreen iMac G4 and since the picture itself is the same and the only thing that changes is the real estate around the picture, I always felt that getting a larger monitor would be an exercise in self-indulgence.
    Well, now I see that all of the surrounding real estate would be very useful for putting folders, documents, pictures, etc on the screen and having them visible and accessible. For someone making a webpage, and I imagine also for making iMovies, which I will soon do for my website, a 19" or larger screen would be VERY helpful. But when a person buys a new computer, then all of that information has to be transferred... and that is enough to make a person stick with the 17" monitor (that and the price of a 21").
    — Lorna in Southern California

    Have you tried using Exposé to make that smaller screen more expansive? You can drag from one window to another with an Exposé transition in between. While not a replacement for a larger screen, it does help when I need it.
    Ken, I've had Tiger for about a week and the only things I've been working with are iWeb and iPhoto! Later I will explore Exposé. It sounds like they were trying to help us out and that's good.
    — Lorna in Southern California

  • Are Oracle Financials suitable for an insurance company?

    Hi there,
    I'm working for an insurance company, and we want to purchase an accounting system, including AP, AR, Assets, GL, HRM, and CRM. I know many companies in the manufacturing and buying-office fields purchase Oracle Financials, so I wonder whether Oracle Financials would be suitable for our company's daily operations.
    What's more, can we purchase just the Financials modules?
    Please advise, thanks a lot!
    New groupie for Oracle Apps.

    I'm running it in production, and I have run into some issues where I recommended to management that they just spring the few hundred bucks for SE.
    The basic problem is that it has the bugs of a first release.  For my system a few have been minor, but there has been extreme growth of the SYSAUX tablespace.  For me, it wound up being simplest to just make a gold backup and restore it whenever the problem manifests, but I can only do that because I can reload all new data.  I have to make a new gold backup whenever the DDL changes.
    I think the lack of bugfixes turns XE from a great idea to a bad one.  It's supposed to be appropriate for environments with no DBA, but that is just plain wrong.
    YMMV.
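    (If anyone wants to see which component is behind the SYSAUX growth, the standard dictionary view below reports the space used by each occupant; nothing here is XE-specific as far as I know.)
    SELECT occupant_name, space_usage_kbytes
    FROM   v$sysaux_occupants
    ORDER  BY space_usage_kbytes DESC;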
