Reg "Allow Bulk Data Load"
Hi all,
Good morning.
What exactly does the "Allow Bulk Data Load" option on the Company Profile page do? The documentation says it allows CRM On Demand consultants to load bulk data, but I am not clear on how they load it. Do they use any tools other than the ones administrators use for data uploading?
Any real-world implementation example using this option would be appreciated.
Regards,
Sreekanth.
The Bulk Data Load utility is similar to the Import Utility and is used by On Demand Professional Services for imports. It is accessed from a separate URL; once a company has enabled Allow Bulk Data Load, Professional Services can use the utility to import the company's data.
The Bulk Data Load utility imports data in much the same way as the Import Utility, with two differences: the number of records per import is higher, and you can queue multiple import jobs.
Similar Messages
-
How to improve performance for bulk data load in Dynamics CRM 2013 Online
Hi all,
We need to bulk update (or create) contacts in Dynamics CRM 2013 Online every night because of data updated from an external data source. The data size is around 100,000 records, and the load currently takes around 6 hours.
We are already using the ExecuteMultiple web service to handle the integration; however, the 6-hour integration duration is still not acceptable, and we are seeking advice on further improvement.
Any help is highly appreciated. Many thanks.
Gary
I think Andrii's referring to running multiple threads in parallel (see http://www.mscrmuk.blogspot.co.uk/2012/02/data-migration-performance-to-crm.html - it's a bit dated, but should still be relevant).
Microsoft does apply some throttling limits in CRM Online, and it is worth contacting them to see if you can get those raised.
100,000 records per night seems a large number. Are all these records new or updated, or are some unchanged, in which case you could filter them out before uploading? Or are there useful ways to summarise the data before loading?
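To make the parallel-thread suggestion concrete, here is a minimal sketch of the pattern, written in Java purely for illustration. sendBatch is a hypothetical stand-in for whatever call wraps a group of upserts in an ExecuteMultiple request in your integration; the batch size and thread count are assumptions you would tune against the throttling limits mentioned above.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelContactLoad {

    // Hypothetical stand-in: build one ExecuteMultiple request from the
    // batch and submit it through the organization service.
    static void sendBatch(List<String> batch) {
        // ... build and execute the request here ...
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> contacts = loadNightlyExtract(); // hypothetical source
        int batchSize = 500; // assumption: tune against server limits
        int threads = 8;     // assumption: tune against throttling limits

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < contacts.size(); i += batchSize) {
            List<String> batch = new ArrayList<>(
                contacts.subList(i, Math.min(i + batchSize, contacts.size())));
            pool.submit(() -> sendBatch(batch));
        }
        pool.shutdown();
        pool.awaitTermination(6, TimeUnit.HOURS);
    }

    static List<String> loadNightlyExtract() {
        return new ArrayList<>(); // placeholder for the external data source
    }
}

With 8 threads and 500-record batches, 100,000 records is only 200 submissions, so wall-clock time is bounded by the slowest concurrent requests rather than one serial stream.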
Microsoft CRM MVP - http://mscrmuk.blogspot.com/ http://www.excitation.co.uk -
In order to load data into a 10.2 database from another Oracle database (8.1), what options are available? sqlldr, create table as select * from ..., export & import?
Any pros vs cons?
Troll35 wrote:
Hello,
I've created a db link between 10g and 8i without any problem; the problem is between 11g and 8i.
It's very release-level specific, and there could be unknown problems in unsupported links between new and old versions.
Typically it's best to consider that database links are only supported back to the previous version of an Oracle database, i.e. 10g to 9i or 11g to 10g; any wider span than that may have issues.
The matrix is quite complex, and I think you can only get the official one through Oracle Support (MetaLink as it was), though it seems there's a version on the following website...
http://www.myoraclesupports.com/content/client-server-interoperability-support-between-different-oracle-versions -
How to insert bulk data into ms-access
Hi,
I am trying to insert bulk data into MS Access. I used Statement and it works fine, but it does not let me insert a single quote. Then I tried PreparedStatement, which allows single quotes but not bulk data. I am getting the following error:
javax.servlet.ServletException: [Microsoft][ODBC Microsoft Access Driver]String data, right truncated (null)
Please help me.
guru
Have you tried the Memo datatype in Access?
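For reference, here is a minimal JDBC sketch of the bulk insert with PreparedStatement parameters, which handle single quotes without any escaping. The DSN, table, and column names are assumptions; the "String data, right truncated" error above usually means the target column is a 255-character TEXT field, so NOTES is assumed to be a Memo column as suggested.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class AccessBulkInsert {
    public static void main(String[] args) throws Exception {
        // JDBC-ODBC bridge (Java 7 and earlier); the DSN name is hypothetical
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        Connection con = DriverManager.getConnection("jdbc:odbc:myAccessDsn");
        con.setAutoCommit(false);

        // NOTES is assumed to be a Memo column; a 255-character TEXT
        // column raises "String data, right truncated" on long values
        PreparedStatement ps = con.prepareStatement(
            "INSERT INTO CUSTOMER (NAME, NOTES) VALUES (?, ?)");

        String[][] rows = {
            {"O'Brien", "Long text with 'single quotes' is fine as a parameter"},
            {"D'Souza", "Another row"}
        };
        for (String[] r : rows) {
            ps.setString(1, r[0]); // no manual quote escaping needed
            ps.setString(2, r[1]);
            ps.addBatch();         // queue the row instead of a round trip each
        }
        ps.executeBatch();
        con.commit();
        ps.close();
        con.close();
    }
}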
-
Is HCM Data Loader functional in Fusion HCM Release 8, or will it be in Release 9?
I understand that the long-term plan for data loading is to provide a single tool, which is referred to as HCM Data Loader.
HCM Data Loader is intended to provide the standard data-import solution and a single entry point for managing bulk data loading to Oracle Fusion HCM.
Which task is used to access this tool? (Is there a document with explanations?)
Also, can someone explain (briefly) when to use each of the functions currently delivered by HCM File-Based Loader, HCM Spreadsheet Data Loader, and some of the specialized data loaders, and the tasks needed to get to these tools?
I have no knowledge of HCM feature availability; however, this seems to be a duplicate of this support forum thread. Adding the link here in case someone else is looking for the same information.
Jani Rautiainen
Fusion Applications Developer Relations
https://blogs.oracle.com/fadevrel/ -
Automate data loads from SPM UI
Hi Experts
We have recently implemented SPM and are currently doing data loads manually.
Is there a way of doing automatic data loads and releases, so that we only get involved if there is a warning or error?
Please guide
Regards
Pankaj
Hi Pankaj,
It is possible to automate the data loads into SPM.
A number of customers have customized this into their solution as part of the implementation.
We are planning to deliver OOTB functionality that allows for data load automation. It is currently being validated.
Please email me directly [email protected] so we can have a call to discuss the other questions that you have posted.
Kind regards,
Michael -
Does the iMac allow me to load data from other brands
Does the Apple iMac desktop allow me to load data from other brand-name mobile units?
Check out dropbox.
https://www.dropbox.com/mobile
https://www.dropbox.com
Check whether your mobile device supports OS X.
Robert
Reg data loading into Essbase using text files
Can we load data in parallel from 2 files into the same cube using 2 different rules files? Or do we have to load one file at a time?
Could someone clarify this?
I do not believe that by selecting two data files and two load rules in AAS you are getting parallel data loading. If you look at the log, you will find them to be sequential. For ASO cubes, AAS loads the data into a buffer and then applies it. The only real parallel data loading is using multiple threads for one file. Other than that, it is sequential.
-
Insert OR Update with Data Loader?
Hello,
Can I INSERT or UPDATE at the same time with Data Loader?
How can i do this?
Thanks.
The GUI loader wizard does allow for this, including automatically adding values to the picklist fields.
However, if you mean the command-line bulk loader, the answer is no. And to compound the problem, the command-line version will actually create duplicates for some of the objects. It appears that the "External Unique Id" is not really unique (as in constrained via a unique index) for some of the objects. So be very careful when you prototype something with the GUI loader and then reuse the map with the command-line version.
You will find that some objects can work well with the command line loader (some objects will not).
Works well (just a few examples):
Account (assuming your NAME and LOCATION fields are unique)
Financial Product
Financial Account
Financial Transaction
Will definitely create duplicates via the command-line bulk loader:
Contact
Asset
Also be aware that you might hear during a go-live that Oracle will remove the 30k-record limit on bulk loads (temporarily). I have not had any luck with Oracle Support making that change (specifically for 2 clients in the last 12 months). -
Statistic on throughput of data loader utility
Hi All
Can you share some statistics on the throughput of the data loader utility? For a concrete number, consider 1 million records: how long would that take to import?
I need these numbers to decide between web services and the data loader utility. Any suggestion is appreciated.
Thank you.
It really depends on the object and the amount of data already in there (both the number of fields you are mapping and how much data is in the table).
For example…
One of my clients has over 1.2M Accounts. It takes about 3 hours (multi-tenant) to INSERT 28k new customers, but when we were first doing it, it was under an hour. Because the bulk loader is limited on record count (most objects are limited to 30k records in the input file), you will need to break up your file accordingly.
But strangely, with the "Financial Account" object (not normally exposed in standard CRMOD), we can insert 30k records in about 30 minutes (and there are over 1M rows in that table). Part of this is probably due to the number of fields on the account and in the address itself (remember the address is a separate table in the underlying DB, even though it looks like there are two sets of address fields on the account).
The bulk loader and the wizard are roughly the same. However, the command-line approach doesn't allow simultaneous INSERT/UPDATE. There are little tricks around this; it depends how you can prepare the extract files from your other system (an UPDATE file and an INSERT file; some systems aren't able to produce that split because of the way they are built).
Some objects you should be very careful with because of the way the indexes are built. For example, ASSET and CONTACT will both create duplicates even when you have an "External Unique Id". For those, we use web services; you aren't limited to a file size there. I think (same client) we have over 800k ASSETS and 1.5M CONTACTS.
The ASSET load (via webservice which does both INSERT and UPDATE) typically can insert about 40k records in about 6 hours.
The CONTACT load (via webservice which does both INSERT and UPDATE) typically can insert about 40k records in about 10 hours.
Your best bet is to do some timings via the import wizard and extrapolate linearly as the data volume sitting in the tables increases; see the sketch below.
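As a rough illustration of that extrapolation, here is a sketch that assumes load time grows linearly with row count, with an optional degradation factor for how much the table has filled up (an assumption based on the 3-hours-now versus under-an-hour-then observation above, not a measured model).

public class LoadTimeEstimate {
    // Linear model: time scales with rows loaded, times a degradation
    // factor as the target table grows. Both inputs come from your own
    // import-wizard timings.
    static double estimateHours(double measuredHours, long measuredRows,
                                long targetRows, double degradation) {
        return measuredHours * ((double) targetRows / measuredRows) * degradation;
    }

    public static void main(String[] args) {
        // Example: 28k rows measured at 3 hours; naive estimate for 1M rows
        System.out.println(estimateHours(3.0, 28_000, 1_000_000, 1.0));
        // prints roughly 107 hours, which is why you time a few 30k files
        // first and watch how the per-file time drifts upward
    }
}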
My company (Hitachi Consulting) can help build these things (both automated bulk loaders and web services) if you are interested due to limited resource bandwidth or other factors. -
Announcing 3 new Data Loader resources
There are three new Data Loader resources available to customers and partners.
• Command Line Basics for Oracle Data Loader On Demand (for Windows) - This two-page guide (PDF) shows command-line functions specific to Data Loader.
• Writing a Properties File to Import Accounts - This 6-minute Webinar shows you how to write a properties file to import accounts using the Data Loader client. You'll also learn how to use the properties file to store parameters, and to use the command line to reference the properties file, thereby creating a reusable library of files to import or overwrite numerous record types.
• Writing a Batch File to Schedule a Contact Import - This 7-minute Webinar shows you how to write a batch file to schedule a contact import using the Data Loader client. You'll also learn how to reference the properties file.
You can find these on the Data Import Resources page, on the Training and Support Center.
• Click the Learn More tab > Popular Resources > What's New > Data Import Resources
or
• Simply search for "data import resources".
You can also find the Data Import Resources page on My Oracle Support (ID 1085694.1).
Unfortunately, I don't believe that approach will work.
We use a similar mechanism for some loads (using the bulk loader instead of web services) for the objects that have a large quantity of daily records.
There is a technique (though messy) that works fine. Since Oracle does not allow the queueing up of objects of the same type (you have to wait for "account" to finish before you load the next "account" file), you can monitor the .LOG file for the SBL 0363 error, which means you cannot submit another file yet (typically because one is already being processed).
By monitoring for this error code in the log, you can sleep your process and try again after a preset amount of time.
We use this to allow an UPDATE followed by an INSERT on the account... and then a similar technique so that "dependent" objects wait for the prime object to finish processing.
PS: normal Windows .BAT scripts aren't sophisticated enough to handle this. I would recommend either Windows PowerShell or C/Korn/Bourne shell scripts on Unix. A rough sketch of the polling loop is below.
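To make the sleep-and-retry idea concrete, here is a minimal sketch of the polling loop, written in Java only for illustration (the same logic ports directly to PowerShell or a Korn/Bourne shell). The log path, retry interval, and submit step are assumptions; SBL 0363 is the error code described above.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class BulkLoadRetry {
    public static void main(String[] args) throws Exception {
        Path log = Paths.get("C:/loader/account_insert.log"); // assumed path
        long retryMillis = 5 * 60 * 1000;                     // assumed: 5 minutes

        while (true) {
            submitAccountFile();  // placeholder: invoke the command-line loader
            Thread.sleep(30_000); // give the loader time to write the log
            if (!logContains(log, "SBL 0363")) {
                break;            // accepted: no job of this type was running
            }
            // SBL 0363 means a job for this object type already exists;
            // sleep, then resubmit the same file
            Thread.sleep(retryMillis);
        }
    }

    static boolean logContains(Path log, String code) throws IOException {
        return Files.exists(log)
            && Files.readAllLines(log).stream().anyMatch(l -> l.contains(code));
    }

    static void submitAccountFile() {
        // hypothetical: launch the bulk loader command line from here
    }
}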
I hope that helps some. -
CALL_FUNCTION_CONFLICT_TYPE Standard Data loading
Hi,
I am facing a data loading problem using Business Content on the CPS_DATE InfoCube (0PS_DAT_MLS DataSource).
The R/3 extraction runs without any error, but the problem occurs in the update rules while updating the milestone date. Please find below the log from ST22.
The really weird thing is that the process works perfectly in the development environment but not in the integration one (the patch levels are exactly the same: BW 3.5 Patch #16).
I apologise for the long message below... this is a part of the system log.
For information, ROUTINE_0004 is a standard routine.
Thanks a lot in advance!
Cheers.
CALL_FUNCTION_CONFLICT_TYPE
Exception: CX_SY_DYN_CALL_ILLEGAL_TYPE
Symptom: Type conflict when calling a function module
Cause: Error in the ABAP application program.
The current ABAP program "GP420EQ35FHFOCVEBCR6RWPVQBR" had to be terminated because one of the statements could not be executed.
This is probably due to an error in the ABAP program.
A function module was called incorrectly.
Error analysis
An exception occurred, which is dealt with in more detail below. The exception, assigned to the class 'CX_SY_DYN_CALL_ILLEGAL_TYPE', was neither caught nor passed along using a RAISING clause, in the procedure "ROUTINE_0004" "(FORM)".
Since the caller of the procedure could not have expected this exception
to occur, the running program was terminated. The reason for the exception is:
The call to the function module "RS_BCT_TIMCONV_PS_CONV" is incorrect:
The function module interface allows you to specify only fields of a particular type under "E_FISCPER".
The field "RESULT" specified here is a different field type.
How to correct the error:
You may be able to find an interim solution to the problem in the SAP note system. If you have access to the note system yourself, use the following search criteria:
"CALL_FUNCTION_CONFLICT_TYPE" CX_SY_DYN_CALL_ILLEGAL_TYPEC
"GP420EQ35FHFOCVEBCR6RWPVQBR" or "GP420EQ35FHFOCVEBCR6RWPVQBR"
"ROUTINE_0004"
If you cannot solve the problem yourself and you wish to send
an error message to SAP, include the following documents:
1. A printout of the problem description (short dump). To obtain this, select "System -> List -> Save -> Local File (unconverted)" in the current display.
2. A suitable printout of the system log. To obtain this, call the system log with transaction SM21. Limit the time interval to 10 minutes before and 5 minutes after the short dump. In the display, then select "System -> List -> Save -> Local File (unconverted)".
3. If the programs are your own programs or modified SAP programs, supply the source code. To do this, select the Editor function "Further Utilities -> Upload/Download -> Download".
4. Details regarding the conditions under which the error occurred
or which actions and input led to the error.
The exception must either be prevented, caught within the procedure
"ROUTINE_0004"
"(FORM)", or declared in the procedure's RAISING clause.
To prevent the exception, note the following:
Environment:
SAP Release.............. "640"
Operating system......... "SunOS"
Release.................. "5.9"
Hardware type............ "sun4u"
Character length......... 8 Bits
Pointer length........... 64 Bits
Work process number...... 2
Short dump setting....... "full"
Database type............ "ORACLE"
Database name............ "BWI"
Database owner........... "SAPTB1"
Character set............ "fr"
SAP kernel............... "640"
Created on............... "Jan 15 2006 21:42:36"
Created in............... "SunOS 5.8 Generic_108528-16 sun4u"
Database version......... "OCI_920 "
Patch level.............. "109"
Patch text............... " "
Supported environment....
Database................. "ORACLE 9.2.0.., ORACLE 10.1.0.., ORACLE 10.2.0.."
SAP database version..... "640"
Operating system......... "SunOS 5.8, SunOS 5.9, SunOS 5.10"
SAP Release.............. "640"
The termination occurred in the ABAP program "GP420EQ35FHFOCVEBCR6RWPVQBR" in
"ROUTINE_0004".
The main program was "RSMO1_RSM2 ".
The termination occurred in line 702 of the source code of the (Include)
program "GP420EQ35FHFOCVEBCR6RWPVQBR"
of the source code of program "GP420EQ35FHFOCVEBCR6RWPVQBR" (when calling the editor 7020).
Processing was terminated because the exception "CX_SY_DYN_CALL_ILLEGAL_TYPE" occurred in the procedure "ROUTINE_0004" "(FORM)", but was neither handled locally nor declared in the RAISING clause of the procedure.
The procedure is in the program "GP420EQ35FHFOCVEBCR6RWPVQBR ". Its source code starts in line 685 of the (Include) program "GP420EQ35FHFOCVEBCR6RWPVQBR ".
672 'ROUTINE_0003' g_s_is-recno
673 rs_c_false rs_c_false g_s_is-recno
674 changing c_abort.
675 catch cx_foev_error_in_function.
676 perform error_message using 'RSAU' 'E' '510'
677 'ROUTINE_0003' g_s_is-recno
678 rs_c_false rs_c_false g_s_is-recno
679 changing c_abort.
680 endtry.
681 endform.
682 ************************************************************************
683 * routine no.: 0004
684 ************************************************************************
685 form routine_0004
686 changing
687 result type g_s_hashed_cube-FISCPER3
688 returncode like sy-subrc
689 c_t_idocstate type rsarr_t_idocstate
690 c_subrc like sy-subrc
691 c_abort like sy-subrc. "#EC *
692 data:
693 l_t_rsmondata like rsmonview occurs 0 with header line. "#EC *
694
695 try.
696 * init variables
697 move-corresponding g_s_is to comm_structure.
698
699 * fill the internal table "MONITOR", to make monitor entries
700
701 * result value of the routine
>>>> CALL FUNCTION 'RS_BCT_TIMCONV_PS_CONV'
703 EXPORTING
704 I_TIMNM_FROM = '0CALDAY'
705 I_TIMNM_TO = '0FISCPER'
706 I_TIMVL = COMM_STRUCTURE-CALDAY
707 I_FISCVARNT = gd_fiscvarnt
708 IMPORTING
709 E_FISCPER = RESULT.
710 * if the returncode is not equal zero, the result will not be updated
711 RETURNCODE = 0.
712 * if abort is not equal zero, the update process will be canceled
713 ABORT = 0.
714
715 catch cx_sy_conversion_error
716 cx_sy_arithmetic_error.
717 perform error_message using 'RSAU' 'E' '507'
718 'ROUTINE_0004' g_s_is-recno
719 rs_c_false rs_c_false g_s_is-recno
720 changing c_abort.
721 catch cx_foev_error_in_function.
Contents of system fields
Name Val.
SY-SUBRC 0
SY-INDEX 2
SY-TABIX 0
SY-DBCNT 0
SY-FDPOS 65
SY-LSIND 0
SY-PAGNO 0
SY-LINNO 1
SY-COLNO 1
SY-PFKEY 0400
SY-UCOMM OK
SY-TITLE Moniteur - Atelier d'administration
SY-MSGTY E
SY-MSGID RSAU
SY-MSGNO 583
SY-MSGV1 BATVC 0000000000
SY-MSGV2 0PROJECT
SY-MSGV3
SY-MSGV4
Selected variables (the dump's character-code rows are omitted):
Nº 23, Type FORM, Name ROUTINE_0004
GD_FISCVARNT
RS_C_INFO = 'I'
COMM_STRUCTURE-CALDAY = '20060303'
SYST-REPID = 'GP420EQ35FHFOCVEBCR6RWPVQBR'
RESULT = '000'
You have an update routine in which you are calling FM 'RS_BCT_TIMCONV_PS_CONV'. The parameter E_FISCPER must have the same type as the variable you pass (you can see the data type in the FM definition, transaction SE37). You should do something like the following:
* Declare a helper variable with the exact type the FM expects:
DATA: var TYPE <the same type as E_FISCPER in the FM definition>.

CALL FUNCTION 'RS_BCT_TIMCONV_PS_CONV'
  EXPORTING
    i_timnm_from = '0CALDAY'
    i_timnm_to   = '0FISCPER'
    i_timvl      = comm_structure-calday
    i_fiscvarnt  = gd_fiscvarnt
  IMPORTING
    e_fiscper    = var.    " var now holds the fiscal period in the FM's type

result = var.              " then move it into the routine's RESULT field
--- Assigning points is appreciated. -
Query performance and data loading performance issues
What query performance issues do we need to take care of? Please explain and let me know the relevant transaction codes. It's urgent.
What data loading performance issues do we need to take care of? Please explain and let me know the transaction codes. It's urgent.
Will reward full points.
Regards
Guru
BW back end
Some tips:
1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 Background Processing Job Management to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 ABAP/4 Run-time Analysis and then run the analysis for the transaction code RSA3 Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW BW IMG Menu on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or one dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading, which reduces extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines to read database tables reduces performance considerably. It is better to use buffers and array operations: the system reads the data from the database tables once, stores it in memory for manipulation, and performance improves. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance, since these transformations are not compiled in advance and are carried out at run-time. (A short illustration of the buffered-read pattern follows this list.)
12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
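To illustrate the buffered-read pattern from tip 11 in a language-neutral way, here is a sketch written in Java/JDBC purely for illustration (in ABAP the equivalent is reading the table once into an internal table, or using SELECT ... FOR ALL ENTRIES, instead of a SELECT SINGLE per record). The table and column names are hypothetical.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

public class BufferedLookup {
    // Read the whole lookup table once into memory (the "buffer").
    static Map<String, String> loadTexts(Connection con) throws SQLException {
        Map<String, String> buffer = new HashMap<>();
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT MATNR, MAKTX FROM MATERIAL_TEXT")) { // hypothetical table
            while (rs.next()) {
                buffer.put(rs.getString(1), rs.getString(2));
            }
        }
        return buffer;
    }

    static void transform(Connection con, String[] records) throws SQLException {
        Map<String, String> texts = loadTexts(con); // one database round trip
        for (String matnr : records) {
            String text = texts.get(matnr); // in-memory probe, no DB access
            // ... use 'text' in the transformation; the anti-pattern would
            // be a single-row SELECT inside this loop ...
        }
    }
}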
Hope it Helps
Chetan
@CP.. -
Material Master Data Loading Error
Hi All,
I am loading master data for material and got the following errors.
1. 0MATERIAL_ATTR:
Record 1: 0MATERIAL: Data record 1 ('000000000000010220 '): Version '000000000000010220 ' is not valid
Procedure: if this message appears during a data load, maintain the attribute in the PSA maintenance screens. If this message appears in the master data maintenance screens, leave the transaction and call it again. This allows you to maintain your master data.
2. 0MATERIAL_TEXT: Data record 1 ('000000000000010220E '): Version '000000000000010220 ' is not valid
Procedure: same as above.
I have run transaction RSKC in BI with ALL_CAPITAL, ALL_CAPITAL_PLUS_HEX, and one special character string, but I am still getting the same error.
Regards,
Komik Shah
Hi all,
Thanks for the replies. I have solved most of the errors, but some remain, like this one:
Diagnosis
Data record 261 & with the key '000000000000132559 &' is invalid in
value '10x8x5 &' of the attribute/characteristic 0SIZE_DIM &.
System Response
The system has recognized that the value mentioned above is invalid, and
has processed this general error message. A subsequent message may give
you more information on the error. This message refers to the same
value, even though it does not state this explicitly.
Procedure
If this message appears during a data load, maintain the attribute in
the PSA maintenance screens. If this message appears in the master data
maintenance screens, leave the transaction and call it again. This allows you to maintain your master data.
Which string should I add in RSKC to remove this error? I have already added ALL_CAPITAL and ALL_CAPITAL_PLUS_HEX.
I have assigned points.
Regards,
Komik Shah -
Unable to load CSV data using APEX Data Load using Firefox/Safari on a MAC
I have APEX installed on a Windows XP machine connected to an 11g database on the same machine.
On Windows XP, using IE 7, I can successfully load a CSV spreadsheet of data using the APEX Data Load utility.
However, if I switch to my MacBook Pro running OS X Leopard, log in to the same APEX machine using Firefox 2 or 3 or Safari 3, and try to upload CSV data, it fails on the "Table Properties" step. When the wizard asks for the name of the new table and then asks you to set table properties, the table properties never appear (they do appear in IE 7 on Windows XP), and if you hit the NEXT button you get the error message: "1 error has occurred. At least one column must be specified to include in new table." Of course, you can't specify any of the columns, because there is nothing under SET TABLE PROPERTIES in the interface.
I tried Firefox 2, Firefox 3 (beta), and Safari 3.1, and got the same failed result on all three. If I return to the Windows XP machine and use IE 7.0, Data Load works just fine. I work in an all-Mac environment; it was difficult to get a Windows machine into my workplace, and all my end users will be using Macs. There is no current version of IE for the Mac, so I have to use Firefox or Safari.
Is there some option in Firefox or Safari that I can turn on so this Data Load feature will work on the MAC?
Thanks for your help. Any assistance appreciated.
Tony
I managed to get this to work by saving the CSV file as Windows CSV (not DOS CSV), which allowed the CSV data to be read by Oracle running on Windows XP. I think the problem had to do with different character sets being used for CSV on the Mac versus CSV on Windows. Maybe if I had created my Windows XP Oracle database with Unicode as the default character set, I would never have experienced this problem.
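If anyone needs to script that conversion rather than re-saving the file by hand, here is a minimal sketch that rewrites a Mac-saved CSV with Windows line endings and a Windows character set. The file names, the MacRoman source encoding, and the windows-1252 target are assumptions based on the workaround above.

import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CsvToWindows {
    public static void main(String[] args) throws IOException {
        Charset macRoman = Charset.forName("x-MacRoman");   // assumed source
        Charset win1252  = Charset.forName("windows-1252"); // assumed target

        String raw = new String(
            Files.readAllBytes(Paths.get("export_mac.csv")), macRoman);

        // Normalize CR or CRLF line endings to Windows CRLF
        String normalized = raw.replace("\r\n", "\n")
                               .replace("\r", "\n")
                               .replace("\n", "\r\n");

        Files.write(Paths.get("export_windows.csv"),
                    normalized.getBytes(win1252));
    }
}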