Performance issue with big CSV files as data source

Hi,
We are creating Crystal Reports for a large banking corporation with CSV files as the data source. For some reports, we need to join 2 CSV files. The problem we have now is that when the 2 CSV files are very large (both >200 MB), performance is very bad and it takes an hour or so to refresh the data in the Crystal Reports designer. The same is true for both CR 11.5 and CR 2008.
My question is: is there any way to improve performance in such situations? For example, can we create an index on the CSV files? If you have ever created reports connecting to CSV, your suggestions will be highly appreciated.
Thanks,
Ray

Certainly a reasonable concern...
The question at this point is: how are the reports going to be used and deployed once they are in production?
I'd look at it from that direction.
For example... They may be able to dump the data directly to another database on a separate server that would insulate the main enterprise server. This would allow the main server to run the necessary queries during off peak hours and would isolate any reporting activity to a "reporting database".
This would also keep the data secure and encrypted (it would continue to enjoy the security provided by an RDBMS). Text & csv files can be copied, emailed, altered & deleted by anyone who sees them. Placing them in encrypted .zip folders prevents them from being read by external applications.
Hope you liked the sales pitch I wrote for you to give to the client... =^)
If all else fails and you're stuck using the CSV files, at least see if they can get it all into one file. Joining the 2 files is what's killing you performance-wise... more so than using 1 massive file.
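If they do stand up a reporting database, the "can we create an index" question from the original post more or less answers itself: land each CSV in its own table, index the join key, and let the database do the join. A minimal sketch, assuming a SQL Server reporting database and made-up table/column names:

-- Placeholder names; assumes the two CSVs have already been loaded into
-- staging tables (e.g. with BULK INSERT, shown further down this page).
CREATE INDEX IX_Accounts_AccountID ON dbo.Accounts (AccountID);
CREATE INDEX IX_Transactions_AccountID ON dbo.Transactions (AccountID);
GO

-- Point the Crystal Report at this view instead of at the two CSV files.
CREATE VIEW dbo.vw_AccountTransactions AS
SELECT a.AccountID, a.AccountName, t.TransactionDate, t.Amount
FROM dbo.Accounts AS a
JOIN dbo.Transactions AS t ON t.AccountID = a.AccountID;
GO

Crystal then only has to render the rows the view returns, instead of reading and joining two 200 MB text files itself.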
Jason

Similar Messages

  • 2.5 GB CSV file as data source for Crystal report

    Hi Experts,
    I was asked to create a Crystal Report using a CSV file as the data source (a file that is pretty huge, about 2.4 GB). Could you help me with any doc that explains the steps, mainly the data connectivity?
    The objective is to create the Crystal Report using that CSV file as the data source, save the report as an .rpt with the data, and send the results to the customer to be read with Crystal Reports Viewer, or save the results to PDF.
    Please help and suggest the steps, as I am new to Crystal Reports and to CSV as a source.
    BR, Nanda Kishore

    Nanda,
    The issue of having some records with commas and some with semicolons will need to be resolved before you can do an import. Assuming that there are no semicolons in any of the text values, you could do a "Find & Replace" to convert the semicolons to commas.
    If Find & Replace isn't an option, you'll need to get the files separately.
    I've never used the Import/Export Wizard myself. I've always used the BULK INSERT command.
    It would look something like this...
    BULK INSERT SQLServerTableName
    FROM 'c:\My_CSV_File.csv'
    WITH (FIELDTERMINATOR = ',')
    This of course implies that your table has the same columns, in the same order, as the CSV file, and that each column is the correct data type to accept the incoming data.
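    For illustration only, the matching table might be declared something like this (the column list here is invented; the real one has to mirror your CSV exactly, column for column, in order):

    -- Placeholder columns: declare them in the same order as the CSV file.
    -- Loading everything as VARCHAR first avoids conversion errors; convert afterwards if needed.
    CREATE TABLE SQLServerTableName (
        OrderNumber  VARCHAR(50),
        OrderDate    VARCHAR(30),
        Description  VARCHAR(255),
        Amount       VARCHAR(30)
    );

    If the file has a header row, adding FIRSTROW = 2 (and, if needed, ROWTERMINATOR = '\n') to the WITH clause of the BULK INSERT will skip it.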
    If you continue to have issues getting your data into SQL Server Express, please post in one of these two forums
    [Transact-SQL|http://social.msdn.microsoft.com/Forums/en-US/transactsql/threads]
    [SQL Server Express|http://social.msdn.microsoft.com/Forums/en-US/sqlexpress/threads]
    The Transact-SQL forum has some VERY knowledgeable people (including MVPs and book authors) posting answers.
    I've never posted to the SQL Server Express forum, but I'm sure they can troubleshoot your issues with the Import/Export Wizard.
    If you post in one of them, please copy the post link back to this thread so I can continue to help.
    Jason

  • Advice needed: how to solve an out-of-memory problem (or how to work with big CSV files)

    Hello:)
    I'm in trouble: I have a big CSV file (over 5 GB of web-analytics data) and 64-bit Excel (and 6 GB of RAM).
    I can't load the file into the data model because of its size. I get an "out of memory" error in Power Query.
    This is the first time I have encountered such a problem.
    What options do I have for working with such a file? Increase the memory in my computer? Would that solve the problem? How much would I need to work with a 6 GB CSV?
    Or maybe I can upload my data somewhere to Azure and work with it there?
    So the question is: is there any way to deal with big files using Power Query? Or do I need to become a developer and learn SQL or other languages?
    Thanks in advance.
    Max
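    If picking up a little SQL is an option, one way to sidestep Excel's memory limit entirely is to load the CSV into a free SQL Server Express database and aggregate it there, so that only a small summary ever reaches Power Query. A rough sketch, with invented table, column and path names:

    -- Placeholder names; the real columns must mirror the CSV layout.
    CREATE TABLE dbo.WebAnalyticsRaw (
        VisitDate  DATE,
        PageUrl    VARCHAR(400),
        Visits     INT
    );

    BULK INSERT dbo.WebAnalyticsRaw
    FROM 'C:\data\analytics.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

    -- A summary like this is usually small enough for Power Query to handle comfortably.
    SELECT VisitDate, PageUrl, SUM(Visits) AS TotalVisits
    FROM dbo.WebAnalyticsRaw
    GROUP BY VisitDate, PageUrl;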

    Hi Miguel!
    Thanks for your answer. 
    I've tried to load this file on a virtual PC from the Azure cloud with this config:
    I have increased the memory limit in the Power Query settings:
    And still, the problem is the same:
    What am I doing wrong?

  • File upload - issue with European .csv file format

    All,
    when uploading the .csv file for "Due List for Planned Receipts" in the File Transfer Upload center, I receive an error. It appears that it is due to the European .csv file format, which is delimited by semicolons rather than commas. The only way I could solve this issue was to change the Regional and Language options to English. However, I don't think this is a great solution, as I can't ask all our suppliers to change their settings. Has anyone come across this issue and found another way of solving it?
    Thank you!
    Have a good day,
    Johanna

    Igor, thank you for your suggestion.
    I found this SAP note:
    "If you download a file, and the formatting of the CSV file is faulty, it is possible that your column separator does not match the standard settings of the SAP system. In the standard SAP system, the separator is ','.
    To ensure that the formatting is correct, set your global default for column separation in your system so that it matches that of the SAP system you are using."
    To do that, Microsoft suggests changing the "List separator" in the Regional and Language Options Customize view. Though, like you suggest, that does not seem to do the trick. Am I missing something?
    However, if I change the whole setting from, say, German to English (UK), the .csv files are comma-delimited and can be easily uploaded. I was hoping there would be another way of solving this without the need for custom development.

  • Issue with processing .csv file

    Hi.
    I have a simple CSV file (multiple rows) which needs to be picked up by the PI file adapter and then processed into a BAPI.
    I created a data type 'Record' which has the column names. Then there is a message type, MT_SourceOrder, using this particular data type. This message type is mapped to BAPI_SALESORDER_CREATEFROMDAT2. I have done the configuration in the receiver file adapter as well to accept the .csv file:
    Document Name: MT_SourceOrder
    Doc Namespace:....
    Recordset Name: Recordset
    Recordset Structure: Row,*
    Recordset Sequence: Ascending
    Recordsets per message: 1000
    Key Field Type: Ascending
    In the parameter section, the following information has been provided:
    Row.fieldNames     MerchantID,OrderNumber,OrderDate,Description,VendorSKU,MerchantSKU,Size,UnitPrice,UnitCost,ItemCount,ItemLineNumber,Quantity,Name,Address1,Address2,Address3,City,State,Country,Zip,Phone,ShipMethod,ServiceType,GiftMessage,Tax,Accountno,CustomerPO
    Row.fieldSeparator     ,
    Row.processConfiguration     FromConfiguration
    However, the mapping is still not working correctly.
    Can anyone please help.
    This is very urgent and all help is very much appreciated.
    Thanks.
    Anuradha SenGupta.

    Hi Santosh.
    I have verified the content in the source payload in SXMB_MONI.
    <?xml version="1.0" encoding="utf-8"?>
    <ns:MT_SourceOrder xmlns:ns="http://xxx.com/LE/PIISalesOrder/">
    <Recordset>
         <Row>
              <MerchantID>PII</MerchantID>
              <OrderNumber>ORD-0504703</OrderNumber>
              <OrderDate>8/11/2011</OrderDate>
              <Description>&quot;Callaway 60&quot;&quot; Women&apos;s Umbrella&quot;</Description>
              <VendorSKU>4824S</VendorSKU>
              <MerchantSKU>CAL5909002</MerchantSKU>
              <Size></Size>
              <UnitPrice>25</UnitPrice>
              <UnitCost>25</UnitCost>
              <ItemCount>2</ItemCount>
              <ItemLineNumber>1</ItemLineNumber>
              <Quantity></Quantity>
              <Name>MARGARET A HAIGHT MARGARET A HAIGHT</Name>
              <Address1>15519 LOYALIST PKY</Address1>
              <Address2></Address2>
              <Address3></Address3>
              <City>BLOOMFIELD</City>
              <State>ON</State>
              <Country></Country>
              <Zip>K0K1G0</Zip>
              <Phone>(613)399-5615 x5615</Phone>
              <ShipMethod>Purolator</ShipMethod>
              <ServiceType>Standard</ServiceType>
              <GiftMessage>PI Holding this cost-</GiftMessage>
              <Tax></Tax>
              <Accountno>1217254</Accountno>
              <CustomerPO>CIBC00047297</CustomerPO>
         </Row>
    </Recordset>
    </ns:MT_SourceOrder>
    It looks as above. However, in the message mapping section it doesn't work: when I do Display Queue on the source field, it keeps showing the value as NULL.
    Please advise.
    Thanks.
    Anuradha.

  • Issue with table ROOSPRMSF entries for data source 0FI_AP_4

    Hi Experts,
    I am facing an issue where we found inconsistencies in table ROOSPRMSF in the R/3 system.
    In BW, we have done initializations based on fiscal period selections (none of the selections overlap) for data source 0FI_AP_4.
    We have done in total 7 initializations. So in BW system in table RSSDLINITSEL we have 7 initialization requests.
    But in R/3 system we have 49 records for data source 0FI_AP_4 in ROOSPRMSF table out of which 42 are invalid records.
    I suspect that these 42 invalid records were created by the execution of program RSSM_OLTP_INIT_DELTA_UPDATE while table ROOSPRMSF was actually holding the 7 initialization request entries. Because of this, each initialization request got linked to all of the other initialization requests, and we ended up with 49 records in table ROOSPRMSF.
    Now our data loads are running fine, but a short dump is raised daily. In the daily loads, the BW init records in RSSDLINITSEL are compared with the ROOSPRMSF entries; the 42 invalid records are written to the system log and a short dump is raised.
    In order to fix these inconsistencies I checked OSS note 852443 (point 3 in the OSS note).
    It specifies deleting the delta queue for data source 0FI_AP_4 in RSA7 and then executing the program RSSM_OLTP_INIT_DELTA_UPDATE so that the ROOSPRMSF table is reconstructed with the valid records available in RSSDLINITSEL.
    From OSS note 852443 point 3
    "3. If the RSSDLINIT table in the BW system already contains entries, check the requests listed there in the RNR column in the monitor (transaction RSRQ). Compare these entries with the entries in the ROOSPRMSF and ROOSPRMSC tables with the INITRNR field. If, in the ROOSPRMSF and ROOSPRMSC tables for your DataSource source system combination, there are more entries with different INITRNR numbers, use transaction RSA7 in an OLTP source system to delete all entries and then use the RSSM_OLTP_INIT_DELTA_UPDATE report mentioned in the next section. For a DataMart source system, delete the entries that you cannot find in the RSSDLINIT table using the procedure described above."
    My question is: if we delete the delta queue in RSA7, then all the tables in R/3 (ROOSPRMSF, ROOSPRMSC, the timestamp table) and in BW (RSSDLINITSEL, the initialization requests) will be cleared. How then will the program RSSM_OLTP_INIT_DELTA_UPDATE copy entries into the ROOSPRMSF table in R/3?
    Could any one please clarify this ?
    Thanks
    Regards,
    Jeswanth

    Hi Amarnath,
    Did you unhide the new field in RSA6 and regenerated the DataSource?
    Often SAP will populate newly added fields (belonging to the same set of tables used for extraction) automatically (e.g. SAP uses 'move-corresponding' in its extractor code, or, in this case, reads all fields from the DD, FM BWFIU_TRANSFORM_FIELDLIST).
    If the DataSource looks fine to you and the field is still not populated in RSA3, you can't go without a user exit.
    Grtx,
    Marco

  • Hard drive performance issue with DPX & MXF files

    Hi everyone,
    I spent hours on this forum reading articles about hardware to use for editing large files. I also have to admit that I made some cutbacks due to budget limitations.
    I spent most of the money on the CPU, motherboard, & RAM (6-core, 32 gigs, hell of a nice MB) (video card = GTX 760, 2 gig). So OK, I cheaped out on the hard drives. (I think)
    I put (2) 7200 rpm drives (RAID 0) as my operating system drive,
    (1) external 8TB RAID G drive (USB 3.0) as my media drive (footage), and
    (2) 7200 rpm drives (RAID 0) as my cache drive.
    I then opened up an OLD project using canon 5D footage (h264 mov) and some DPX sequence files.
    The 5D footage played in real time, BUT as soon as it ran into the DPX sequence, the frames dropped.
    I wanted to cry. I just spent 1,600.00 on parts (3,000.00 retail if I had bought it instead of building it).
    And I got the same performance as my old quad-core 6600 / 6 GB RAM system. WHAT GIVES????
    So, can someone recommend the cheapest workaround that can get me editing in realtime?
    Do you think it's the USB 3.0 connection?
    The footage I will be editing will be from the canon c300 (MXF files)
    My fear is, if I am dropping frames with DPX file, what's going to happen when I get the MXF files?
    Fun Facts:
    I tested my OS, cache & media drives, and I was getting 290-301 MB/sec READ SPEEDS.
    Thanks in advance !

    Loucann,
    I do not have the data on a three-drive RAID 0, but here is a two-drive RAID 0 with good Seagate ST2000DM001 drives. I have to say "good" because there have been two varieties available, some with two platters and some with three. The preferable ones are the two-platter drives, and my experience shows these drives have an "E" as the third character of the serial number, like Z1E1ME4V. They are slightly faster than the three-platter versions. Here is the HD Tune read of the two better ST2000DM001 drives in RAID 0 on my system.
    The best connection is to have all three on a RAIDable Intel SATA III connection, but I am doubtful whether this can be accomplished on many motherboards; maybe Eric can comment and enlighten us.
    What motherboard do you have?  I guess from the info above it is a socket 2011.

  • Performance issues with Planning data load & Agg in 11.1.2.3.500

    We recently upgraded from 11.1.1.3 to 11.1.2.3. Post-upgrade we face performance issues with one of our Planning jobs (e.g. Job E). It takes 3x the time to complete in our new environment (11.1.2.3) compared to the old one (11.1.1.3). This job loads the actual data and does the aggregation. The pattern we noticed is: if we run a restructure on the application and execute this job immediately, it completes in the same time as on 11.1.1.3. However, in current production (11.1.1.3) the jobs run in the sequence Job A -> Job B -> Job C -> Job D -> Job E and complete on time, but if we do the same test in 11.1.2.3 in the above sequence it takes 3x the time. We don't have a window to restructure the application before running Job E every time in Prod. The specs of the new environment are much higher than the old one.
    We have Essbase clustering (MS active/passive) in the new environment and the files are stored on the SAN drive. Could this be because of that? Has anyone faced performance issues in a clustered environment?

    Do you have exactly the same Essbase config settings and calculations performing the AGG? Remember, something very small like UPDATECALC ON/OFF can make a BIG difference in timing.

  • Performance issues with data warehouse loads

    We have performance issues with our data warehouse load ETL process. I have run analyze and dbms_stats and checked the database environment. What else can I do to optimize performance? I cannot use Statspack since we are running Oracle 8i. Thanks
    Scott

    Hi,
    you should analyze the db after you have loaded the tables.
    Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
    If yes:
    make sure your sequences cache values (ALTER SEQUENCE s CACHE 10000)
    Drop all unneeded indexes while loading and disable triggers if possible.
    How big is your redo log buffer? When loading a large amount of data it may be an option to enlarge this buffer.
    Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
    Is it possible to use a direct load? Or do you already use direct-path loads?
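    Putting those suggestions together, a rough sketch of what they look like in SQL (all object names below are placeholders, and exact options vary by Oracle version):

    -- All object names are placeholders for illustration.
    ALTER SEQUENCE my_pk_seq CACHE 10000;

    -- Skip index maintenance during the load, rebuild afterwards.
    ALTER INDEX fact_sales_idx1 UNUSABLE;
    ALTER SESSION SET skip_unusable_indexes = TRUE;

    -- Direct-path load from the staging table.
    INSERT /*+ APPEND */ INTO fact_sales
    SELECT * FROM stg_sales;
    COMMIT;

    ALTER INDEX fact_sales_idx1 REBUILD;

    -- Re-analyze after the load so the optimizer has current statistics.
    EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'FACT_SALES');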
    Dim

  • Performance issue with FDM when importing data

    In the FDM web console, a performance issue has been detected when importing data (.txt).
    In less than 10 seconds the ".txt" and ".log" files are created, in the INBOX folder (the ".txt" file) and in OUTBOX\Logs (the ".log" file).
    At that moment, the system shows the message "Processing, please wait" for 10 minutes. Eventually the information is displayed; however, if we want to see the second page, we have to wait more than 20 seconds.
    It seems to be a performance issue when the system tries to show the imported data on the web page.
    It has also been noted that when a user tries to import a .txt file directly by clicking on the tab "Select File From Inbox", the user also has to wait another 10 minutes before the information is displayed on the web page.
    Thx in advance!
    Cheers
    Matteo

    Hi Matteo
    How much data is being imported/displayed when users are interacting with the system?
    There is a report that may help you to analyse this but unfortunately I cannot remember what it is called and don't have access to a system to check. I do remember that it breaks down the import process into stages showing how long it takes to process each mapping step and the overall time.
    I suspect that what you are seeing is normal behaviour but that isn't to say that performance improvements are not possible.
    The copying of files is the first part of the import process before FDM starts the import, so that part will be quick. The processing is then the time taken to import the records, process the mappings and write to the tables. If users are clicking 'Select file from Inbox' then they are re-importing, so it will take just as long as it would for you to import it; they are not just retrieving previously imported data.
    Hope this helps
    Stuart

  • Anyone having issues with importing CR2 files into Lightroom 5? An error message comes up saying "Some import operations were not performed". Please advise what the solution is.

    Urgent please

    Sounds like the folder Write permissions issue described here with a solution:
    "Some import operations were not performed" from camera import

  • Performance Issues with large XML (1-1.5MB) files

    Hi,
    I'm using XML Schema-based object-relational storage for my XML documents, which are typically 1-1.5 MB in size, and I am having serious performance issues with XPath queries.
    When I do an XPath query against an element of SQLType varchar2, I get good performance. But when I do a similar XPath query against an element of SQLType collection (a VARRAY of varchar2), I get very ordinary performance.
    I have also created indexes on extract() and analyzed my XMLType table and indexes, but I get no performance gain. I have also tried all sorts of storage options available for collections, i.e. VARRAYs, nested tables, IOTs, LOBs, inline, etc., and all of these gave me the same bad performance.
    I even tried creating XMLType views based on XPath queries but the performance didn't improve much.
    I guess I'm running out of options and patience as well.;)
    I would appreciate any ideas/suggestions, please help.....
    Thanks;
    Ramakrishna Chinta
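    For anyone searching later, the kind of function-based index referred to above (on a scalar element) typically looks something like the sketch below; the table name and XPath are invented for the example, and as the post notes this does not help with the VARRAY-based collection elements:

    -- Table name and XPath are placeholders; assumes schema-based XMLType storage.
    CREATE INDEX doc_ref_idx ON my_xml_docs x
        (extractValue(VALUE(x), '/Document/Header/Reference'));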

    Are you having similar symptoms as I am? http://discussions.apple.com/thread.jspa?threadID=2234792&tstart=0

  • Performance issues with Homesharing?

    I have a Time Capsule as the base station for my wireless network, then two AirPort Expresses set up to extend the network around the house, an iMac i7 as the main iTunes library, a couple of iPads, and a couple of Apple TVs. Everything has the latest software, but I have several performance issues with Home Sharing. I've done several tests making sure nothing is taking additional bandwidth, so here is the list of issues:
    1) With nothing else running, when trying to play a movie via Home Sharing on an iPad 2 (the movie is located on my iMac), it stops and I have to keep pressing the play button over and over again. I typically see that the iPad tries to download part of the movie first and then starts playing so that it deals with the bandwidth, but in many cases it doesn't.
    2) When trying to play any iTunes content (movies, music, photos, etc.) from my Apple TV, I can see my computer's library, but when I go into any of the menus, it says there's no content. I have to reboot the Apple TV and then the problem is fixed. It's just annoying that I have to reboot.
    3) When watching a Netflix movie on my iPad, I send the sound via AirPlay to some speakers through an AirPort Express. At times I lose the connection to the speakers.
    I've complained about Wi-Fi's instability, but here I tried to keep everything with Apple products to avoid any compatibility issues and stay within N wireless technology, which I understood was much more stable.
    Does anyone have any suggestions?


  • Performance issues with version enable partitioned tables?

    Hi all,
    Are there any known performance issues with version-enabled partitioned tables?
    I've been doing some performance tests with a large version-enabled partitioned table and it seems that the CBO optimizer is choosing very expensive plans during merge operations.
    Thanks in advance,
    Vitor
    Example:
         Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
    UPDATE STATEMENT Optimizer Mode=CHOOSE          1          249                    
    UPDATE     SIG.SIG_QUA_IMG_LT                                   
    NESTED LOOPS SEMI          1     266     249                    
    PARTITION RANGE ALL                                   1     9
    TABLE ACCESS FULL     SIG.SIG_QUA_IMG_LT     1     259     2               1     9
    VIEW     SYS.VW_NSO_1     1     7     247                    
    NESTED LOOPS          1     739     247                    
    NESTED LOOPS          1     677     247                    
    NESTED LOOPS          1     412     246                    
    NESTED LOOPS          1     114     244                    
    INDEX RANGE SCAN     WMSYS.MODIFIED_TABLES_PK     1     62     2                    
    INDEX RANGE SCAN     SIG.QIM_PK     1     52     243                    
    TABLE ACCESS BY GLOBAL INDEX ROWID     SIG.SIG_QUA_IMG_LT     1     298     2               ROWID     ROW L
    INDEX RANGE SCAN     SIG.SIG_QUA_IMG_PKI$     1          1                    
    INDEX RANGE SCAN     WMSYS.WM$NEXTVER_TABLE_NV_INDX     1     265     1                    
    INDEX UNIQUE SCAN     WMSYS.MODIFIED_TABLES_PK     1     62                         
    /* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */                                        
    UPDATE /*+ USE_NL(Z1) ROWID(Z1) */sig.sig_qua_img_lt z1                                        
    SET z1.nextver =                                        
    SYS.ltutil.subsversion                                        
    (z1.nextver,                                        
    SYS.ltutil.getcontainedverinrange (z1.nextver,                                        
    'SIG.SIG_QUA_IMG',                                        
    'NpCyPCX3dkOAHSuBMjGioQ==',                                        
    4574,
    4575),
    4574)
    WHERE z1.ROWID IN (
    (SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
    INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
    INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
    t2.ROWID
    FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j1,
    sig.sig_qua_img_lt t1,
    sig.sig_qua_img_lt t2,
    wmsys.wm$nextver_table j2,
    (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j3
    WHERE t1.VERSION = j1.VERSION
    AND t1.ima_id = t2.ima_id
    AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
    AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
    AND t2.nextver != '-1'
    AND t2.nextver = j2.next_vers
    AND j2.VERSION = j3.VERSION))

    Hello Vitor,
    There are currently no known issues with version enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
    One suggestion would be to make sure that the table has been recently analyzed so that the optimizer has the most current data about the table.
    Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
    Thank You,
    Ben

  • Performance issues with SAP BPC 7.0/7.5 (SP06, 07, 08) NW

    Hi Experts
    There are some performance issues with SAP BPC 7.5/7.0 NW: users are saying they are not getting data, or there are some issues while getting data from the R/3 system or ECC 6.0. What things do I need to consider checking, for example which DataSources or cubes? How do I solve this issue?
    What things do I need to consider for an SAP NW BI 7.0 to SAP BPC 7.5 NW (SP06, 07, 08) implementation?
    Your help is greatly appreciated.
    Regards,
    Qadeer

    Hi,
    A new SP was released in February, and most of the new bugs should have been caught by now. This has a central Note: for SP06 it's Note 1527325 - Planning and Consolidation 7.5 SP06 NetWeaver Central Note. Most of the improvements in SP06 were related to performance, especially when logging on from the BPC clients. There you should be able to find a big list of fixes/improvements and the Notes that describe them. Some of the Notes even have a test description of how to reproduce the issue in the old version.
    Hope this will help you.
    Regards
    Rv
