Large Flat Files

I have some large pipe delimited flat files (up to 1.5 GB in size) from a mainframe that I need to get into Oracle. I was just wondering if this is possible via the Oracle Migration Workbench.
The problem with using SQL*Loader was getting the files onto the database box, because of company politics :)
Any help or ideas much appreciated.

Ninder,
The Migration Workbench won't be able to help you in this case. As you say the files are coming from a mainframe, I assume they come from DB2 running on the mainframe.
The migration workbenches - either the original 10.1.0.4 release or the SQL Developer version - do not currently support migrating DB2 on z/OS.
In any case, for migrating large amounts of data the recommendation has always been to use offline migration via SQL*Loader, as online migration was not really suitable for that.
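For reference, a minimal SQL*Loader control file for a pipe delimited file would look something like the sketch below (the table and column names are only placeholders, not from your system):
LOAD DATA
INFILE 'mainframe_extract.dat'
APPEND
INTO TABLE stg_mainframe_data
FIELDS TERMINATED BY '|'
TRAILING NULLCOLS
(
  customer_id,
  customer_name,
  created_date DATE "YYYY-MM-DD"
)
You would run it with something like sqlldr userid=<user>/<password>@<db> control=load_extract.ctl log=load_extract.log. Since SQL*Loader is a client tool that can run on whatever machine holds the files and connect over SQL*Net, it may also get you around the problem of copying the files onto the database box.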
Regards,
Mike

Similar Messages

  • Flat-File Upload - Large File Size

    Hello,
    I have to upload a very large flat file with a size of 200 to 300 MB. The upload from the presentation server (local Win XP client) fails after exceeding the time limit, because the upload runs in online mode. I have two approaches to fix the problem.
    1. Upload the file to the application server and start upload in batch-mode.
    2. Upload the file to the application server and access/read it with OPEN DATASET and READ DATASET line by line, then write it with a function module into a transactional ODS object.
    Please provide me with information/hints about your experience in dealing with large flat files.
    Thanks & best regards

    Hello Markus
    I would suggest you place the file on the application server and upload it in background mode.
    This is from SAP Help,
    If you want to upload a large amount of transaction data from a flat file, and you are able to specify the file type of the flat file, you should create the flat file as an ASCII file. From a performance point of view, uploading the data from an ASCII file is the most cost-effective method.
    In certain circumstances, generating an ASCII file might involve a larger workload.
    Hope it helps
    Thanks
    Chandran

  • Loading data from a flat file to Oracle DB

    I have 600-700 MB data files on an AIX box which need to be uploaded to Oracle using ODI.
    I am considering the KMs below... which would be more appropriate in this case?
    LKM File to Oracle
    LKM File to Oracle (EXTERNAL TABLE)
    Why should we use LKM File to Oracle (EXTERNAL TABLE) at all, and do we need to create a separate table structure for it, or will ODI take care of that internally?

    Hi,
    This KM loads data from a file into an Oracle staging area using the EXTERNAL TABLE SQL command. Because this method uses the native EXTERNAL TABLE command, it is more efficient than the standard LKM File to Oracle when dealing with large volumes of data. However, the loaded file must be accessible from the Oracle server machine. Note that the data of the source file is not duplicated in the Oracle staging area table; this table acts only as a "synonym" of the file. This may sometimes lead to performance issues if the same file is joined with other large tables in your interface. For optimization purposes, you can enhance this LKM by adding an extra step that copies the file content to an actual physical table in the Oracle staging area. Consider using this LKM if your source is a large flat file and your staging area is an Oracle database.
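    Under the covers this KM builds an Oracle external table over the file. As a rough sketch only (the directory, table and column names below are made up), the work table it creates has this general shape:
    CREATE TABLE C$_0SRC_FILE (
      cust_id    NUMBER,
      cust_name  VARCHAR2(100),
      load_date  DATE
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY dat_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('source_file.dat')
    )
    REJECT LIMIT UNLIMITED;
    ODI generates this work table itself from the file datastore definition, so you do not need to create a separate table structure by hand.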
    Regards,
    surya

  • Process Flat Files

    Hi everyone, 
    I currently need to process about 400 txt files, equivalent to about 1 TB of data, in at most 5-6 hours. I need to run a very simple script, where I would capture about 0.10% of the data - something like a SELECT with a WHERE clause. Originally I
    was thinking of importing this data into SQL Server, however I'm not sure my computer could handle the workload in a reasonable time frame.
    What would you suggest? Are there any other products that Microsoft offers to process extremely large flat files?
    Thank you in advance.
    Cheers,

    Hi AlexB0865,
    The solution you have mentioned should be the best one in my opinion. The performance bottlenecks in this case are:
    1. Loading data from 400 text files, about 1 TB in total (each file should be about 2.5 GB)
    2. Bulk inserting the data into SQL Server
    3. Filtering data from the 1 TB of data
    For the first challenge, it won't be an issue in SSIS per my testing. I have a file with 23,504,761 rows, which is 2.5 GB; this file can be imported into a SQL Server table within 1 minute.
    For the second challenge, we can split the bulk insert to improve the performance.
    For the third challenge, with proper index(es) created, the performance won't be a problem.
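    As a rough illustration only (the table, file and column names here are made up), the load-then-filter pattern could look like this in T-SQL:
    -- staging table for the raw rows
    CREATE TABLE dbo.StagingData (
      EventCode  varchar(20),
      EventTime  datetime2,
      Payload    varchar(4000)
    );
    -- bulk load one of the 400 text files
    BULK INSERT dbo.StagingData
    FROM 'D:\input\file_001.txt'
    WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', TABLOCK);
    -- with an index on the filter column, capturing ~0.1% of the rows is a plain SELECT
    SELECT *
    INTO dbo.FilteredResult
    FROM dbo.StagingData
    WHERE EventCode = 'XYZ';
    In SSIS the loading step is done with a Flat File Source feeding a SQL Server destination, which is essentially the scenario Henk benchmarks in the blog linked below.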
    Henk details the test results for this same scenario in his blog:
    http://henkvandervalk.com/speeding-up-ssis-bulk-inserts-into-sql-server
    If you have any more questions, please feel free to ask.
    Thanks,
    Jinchun Chen

  • InfoSpoke Flat File Extract to Logical Filename

    I'm trying to extract data from an ODS to a flat file. So far, I've found that the InfoSpoke must write to the application server for large data volume. Also, in order for the InfoSpoke to transport properly, I must use logical filenames. I've attempted to implement the custom class and append structure as defined in the SAP document "How To... Extract Data with OPEN HUB to a Customer Defined Logical Filename". I'm getting an error when attempting to import the included transports (custom class code). It appears to be a syntax error. Has anyone encountered this, and, if so, how did you fix it?

    Hello.
    I'm getting a syntax error also. I did not import the transport, but applied the notes through the appendix. When I modified the method GET_OBJECT_REF_INT in class CL_RSB_DEST as below, I get a syntax error on the CREATE OBJECT statement.
        when rsbo_c_desttype_int-file_applsrv.
    *{   REPLACE        &$&$&$&$                                          1
    *\      data: l_r_file_applsrv type ref to cl_rsb_file_applsrv.
          data: l_r_file_applsrv type ref to zcl_rsb_file_logical.
    *}   REPLACE
          create object l_r_file_applsrv
            exporting i_dest    = n_dest
                      i_objvers = i_objvers
    Class CL_RSB_DEST, method GET_OBJECT_REF_INT:
    The obligatory parameter "I_S_VDEST" had no value assigned to it.

  • Extracting data from Oracle to a flat file

    I'm looking for options to extract data, a table or view at a time, to a text file. Short of writing the SQL to do it, are there any other ideas? I'm dealing with large volumes of data, and would like a bulk copy routine.
    Thanks in advance for your help.

    Is there any script which I can use for pulling data from tables to a flat file and then import that data into other DBs?
    A flat file is adequate only for VARCHAR2, NUMBER, and DATE datatypes;
    other datatypes present a challenge within a text file.
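    For that simple case, a SQL*Plus spool script is often all that is needed; a minimal sketch (the table and column names are placeholders):
    SET HEADING OFF FEEDBACK OFF PAGESIZE 0 LINESIZE 32767 TRIMSPOOL ON
    SPOOL emp_extract.txt
    SELECT empno || '|' || ename || '|' || TO_CHAR(hiredate, 'YYYY-MM-DD')
    FROM   emp;
    SPOOL OFF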
    A more robust solution is to use export/import to move data between Oracle DBs.

  • Reducing the time interval in the file adapter to write a flat file at a location

    Hi All,
    I have a scenario where I have to write a flat file (XXX.txt) to a location. Before doing that, I have to check whether XXX.txt already exists or not. If it doesn't exist, then I have to write the XXX.txt file there. If it already exists, then I have to wait until that XXX.txt file gets deleted.
    In the receiver file adapter we have the option file construction mode = create, which does the same thing. But the problem is that it is taking too long (more than 5 minutes), which is not at all acceptable in my case (it is OK if it takes 1 minute).
    Is there any way to reduce the time interval using the same option?
    Or do we have any workaround for achieving the same scenario?
    Any help would be appreciated.
    Thanks in advance.
    Anil

    Anil
    As far as my knowledge goes, I think it is not possible, because we are not doing anything from our end - XI is doing the processing and creating the file for you. But you might be sending a large file at a time, so you have to improve the performance in your scenario. Check these URLs on how to improve performance in XI:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/70ada5ef-0201-0010-1f8b-c935e444b0ad
    Improving performance in XI
    Maximum Permitted File Size in XI
    ---Satish

  • Error when exporting to flat file in ODI 11g

    This works ok in ODI 10g. I'm using IKM SQL to File Append on Windows Server 2008 R2
    Getting the following error when exporting to a flat file in ODI 11g: ODI-40406: Bytes are too big for array
    I've seen a couple of threads like this on the forum, but I've never seen an answer to the problem.

    The problem is a difference in behaviour of the IKM SQL to File Append KM between 10g and 11g.
    Our 10g target file datastore had a mixture of fixed string and numeric columns. Mapping from the source to target was simple one to one column mapping. It generated the desired fixed format flat file; numerics were right adjusted with embedded decimal point and leading spaces. Each numeric column in the generated flat file occupied the exact space allocated to it in the layout. This gave the desired results, even though documentation for the 10g IKM states that all columns in the target must be string.
    When we converted to 11g and tried to run this interface, it generated an error on the "numeric" columns because it was wrapping them in quotation marks. The resulting column was being treated as a string, and it was larger than the defined target once it acquired those quotation marks.
    In order to get 11g to work, I had to change all the numeric columns in the target datastore to fixed string 30. I then had to change the mapping for these numeric columns to convert them to right-adjusted character strings (i.e. RIGHT(SPACE(30) + RTRIM(MyNumericColumn), 30)).
    Now it works.

  • How to determine the size of a flat file attachment in a mail sender cc?

    Hi
    I use PI 7.11
    I have a scenario where a flat file attachment is being picked up by a MailSender adapter, and if the size of the attached flat file is larger than 500 bytes the receiver is A, and if the attachment is less than 500 bytes the receiver is B.
    I have checked the Context Object list in the Conditions section of Receiver Determination, and it seems that only the file adapter allows for validation on the file size.
    I contemplated an extended receiver determination, but the attachment is a flat file which just needs to be passed through without being converted to an XML document, so my source document is not in XML format.
    Another, but not very nice, way would be to use an intermediate folder to drop the attachment in, and from there use a file sender adapter and thus get access to the filesize attribute in my receiver determination, but I would like to avoid this.
    Any suggestions?
    BR MIkael

    Hi
    I have decided to make a module where I plan to place the size of the attachment in the Dynamic Configuration variable "ProcessStep", but to my surprise it seems that all the context objects in the receiver determination condition section are of type String, and hence it is not possible to make a condition that tests whether the value is smaller than, say, 500!? Only EQUAL, NOT EQUAL and LIKE are available (EX seems to be disabled, as this probably only works in an XPath).
    Would one way be to use an XPath expression like ns1:Main\ProcessStep < 500, where ns1 = http://sap.com/xi/XI/System?
    MIkael

  • Extract Master Data to Flat File

    Hello
    Do you know an easy way to extract master data into a flat file? I've tried using maintain master data, but my master table is too large to display that way.

    Dear Satya,
    1) You can simply write an ABAP to access the P table (/BIC/P<InfoObject>)
    2) Use an InfoSpoke:
    RSBO -> ZDOWNLOAD -> Create -> choose Data Source as InfoObject with attributes,
    then select your fields and selections and specify the flat file destination, and that's it.
    regards,
    Hari
    Message was edited by: Hari Kiran Y

  • Golden Gate for flat file

    hi,
    I have tried GoldenGate for Oracle / non-Oracle databases. Now I am trying it for flat files.
    What i have done so far:
    1. I have downloaded Oracle "GoldenGate Application Adapters 11.1.1.0.0 for JMS and Flat File Media Pack"
    2. Kept it on the same machine where the database and the GG manager process exist. Port for the GG manager process is 7809, for the flat file one 7816.
    3. Following the GG flat file administrator's guide, page 9 --> configuration.
    4. Extract process on the GG manager side:
    edit params FFE711
    extract ffe711
    userid ggs@bidb, password ggs12345
    discardfile E:\GoldenGate11gMediaPack\V26071-01\dirrpt\EXTFF.dsc, purge
    rmthost 10.180.182.77, mgrport 7816
    rmtfile E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote, purge, megabytes 5
    add extract FFE711, EXTTRAILSOURCE ./dirdat/oo
    add rmttrail ./dirdat/pp, extract FFE711, megabytes 20
    start extract FFE711
    view report ffe711
    Oracle GoldenGate Capture for Oracle
    Version 11.1.1.1 OGGCORE_11.1.1_PLATFORMS_110421.2040
    Windows (optimized), Oracle 11g on Apr 22 2011 03:28:23
    Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.
    Starting at 2011-11-07 18:24:19
    Operating System Version:
    Microsoft Windows XP Professional, on x86
    Version 5.1 (Build 2600: Service Pack 2)
    Process id: 4628
    Description:
    ** Running with the following parameters **
    extract ffe711
    userid ggs@bidb, password ********
    discardfile E:\GoldenGate11gMediaPack\V26071-01\dirrpt\EXTFF.dsc, purge
    rmthost 10.180.182.77, mgrport 7816
    rmtfile E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote, purge, megabytes 5
    CACHEMGR virtual memory values (may have been adjusted)
    CACHEBUFFERSIZE: 64K
    CACHESIZE: 1G
    CACHEBUFFERSIZE (soft max): 4M
    CACHEPAGEOUTSIZE (normal): 4M
    PROCESS VM AVAIL FROM OS (min): 1.77G
    CACHESIZEMAX (strict force to disk): 1.57G
    Database Version:
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE 11.1.0.7.0 Production
    TNS for 32-bit Windows: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    Database Language and Character Set:
    NLS_LANG environment variable specified has invalid format, default value will be used.
    NLS_LANG environment variable not set, using default value AMERICAN_AMERICA.US7ASCII.
    NLS_LANGUAGE = "AMERICAN"
    NLS_TERRITORY = "AMERICA"
    NLS_CHARACTERSET = "AL32UTF8"
    Warning: your NLS_LANG setting does not match database server language setting.
    Please refer to user manual for more information.
    2011-11-07 18:24:25 INFO OGG-01226 Socket buffer size set to 27985 (flush size 27985).
    2011-11-07 18:24:25 INFO OGG-01052 No recovery is required for target file E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote, at RBA 0 (file not opened).
    2011-11-07 18:24:25 INFO OGG-01478 Output file E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote is using format RELEASE 10.4/11.1.
    ** Run Time Messages **
    5. On the flat file GGSCI prompt:
    edit params FFR711
    extract ffr711
    CUSEREXIT E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\flatfilewriter.dll CUSEREXIT passthru includeupdatebefores, params "E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\sample-dirprm\ffwriter.properties"
    SOURCEDEFS E:\GoldenGate11gMediaPack\V26071-01\dirdef\vikstkFF.def
    table ggs.vikstk;
    add extract ffr711, exttrailsource ./dirdat/pp
    start extract ffr711
    view report ffr711
    Oracle GoldenGate Capture
    Version 11.1.1.0.0 Build 078
    Windows (optimized), Generic on Jul 28 2010 19:05:07
    Copyright (C) 1995, 2010, Oracle and/or its affiliates. All rights reserved.
    Starting at 2011-11-07 18:21:31
    Operating System Version:
    Microsoft Windows XP Professional, on x86
    Version 5.1 (Build 2600: Service Pack 2)
    Process id: 5008
    Description:
    ** Running with the following parameters **
    extract ffr711
    CUSEREXIT E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\flatfilewriter.dll CUSEREXIT passthru includeupdatebefores, params "E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\sample-dirprm\ffwriter.properties"
    E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\ggs_Windows_x86_Generic_32bit_v11_1_1_0_0_078\extract.exe running with user exit library E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\flatfilewriter.dll, compatibility level (2) is current.
    SOURCEDEFS E:\GoldenGate11gMediaPack\V26071-01\dirdef\vikstkFF.def
    table ggs.vikstk;
    CACHEMGR virtual memory values (may have been adjusted)
    CACHEBUFFERSIZE: 64K
    CACHESIZE: 1G
    CACHEBUFFERSIZE (soft max): 4M
    CACHEPAGEOUTSIZE (normal): 4M
    PROCESS VM AVAIL FROM OS (min): 1.87G
    CACHESIZEMAX (strict force to disk): 1.64G
    Started Oracle GoldenGate for Flat File
    Version 11.1.1.0.0
    ** Run Time Messages **
    Problem I am facing:
    I am not sure where to find the generated flat file, and the reports show there is no data at the manager process.
    I was expecting a replicat rather than an extract for the flat file parameter file FFR711.prm.
    This is as far as I have got; please give me some pointers.
    Thanks,
    Vikas

    Ok, I haven't run your example, but here are some suggestions.
    Vikas Panwar wrote:
    extract ffe711
    userid ggs@bidb, password ggs12345
    discardfile E:\GoldenGate11gMediaPack\V26071-01\dirrpt\EXTFF.dsc, purge
    rmthost 10.180.182.77, mgrport 7816
    rmtfile E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote, purge, megabytes 5
    ggsci> add extract FFE711, EXTTRAILSOURCE ./dirdat/oo
    ggsci> add rmttrail ./dirdat/pp, extract FFE711, megabytes 20
    ggsci> start extract  FFE711
    You of course need data captured from somewhere to test with. You could capture changes directly from a database and write those to a trail, and use that as a source for the flat-file writer; or, if you have existing trail data, you can just use that (I often test with old trails, with known data).
    In your example, you are using a data pump that is doing nothing more than pumping trails to a remote host. That's fine, if that's what you want to do. (It's actually quite common in real implementations.) But if you want to actually capture changes from the database, then change "add extract ... extTrailSource" to be "add extract ... tranlog". I'll assume you want to use the simple data pump to send trail data to the remote host. And I will assume that some other database capture process is creating the trail dirdat/oo
    Also... with your pump "FFE711", you can create either a local or remote trail, that's fine. But don't use a rmtfile (or extfile). You should create a trail, either a "rmttrail" or "exttrail". The flat-file adapter will read that (binary) trail and generate text files. Trails automatically roll over; the "extfile/rmtfile" do not (but they do have the same internal GG binary log format). (You can use a 'maxfiles' option to force them to roll over, but that's beside the point.)
    Also:
    - don't forget your "table" statements... or else no data will be processed!! You can wildcard tables, but not schemata.
    - there is no reason that anything would be discarded in a pump.
    - although a matter of choice, I don't see why people use absolute paths for reports and discard files. Full paths to data and def files make sense if they are on the SAN/NAS, but then I'd use symlinks from dirdat to the storage directory (on Unix/Linux).
    - both Windows and Unix can use forward "/" slashes, which makes examples platform-independent (another reason for relative paths).
    - your trails really should be much larger than 5 MB for better performance (e.g., 100 MB).
    - you probably should use a source-defs file, instead of a dblogin, for metadata. Trail data is by its very nature historical, and using "userid...password" in the prm file inherently gets metadata from "right now". The file-writer doesn't handle DDL changes automatically.
    So you should have something more like:
    extract ffe711
    sourcedefs dirdef/vikstkFF.def
    rmthost 10.180.182.77, mgrport 7816
    rmttrail dirdat/ff, purge, megabytes 100
    table myschema.*;
    table myschema2.*;
    table ggs.*;
    For the file-writer pump:
    5. on Flat file GGSCI prompt:
    extract ffr711
    CUSEREXIT flatfilewriter.dll CUSEREXIT passthru includeupdatebefores, params dirprm\ffwriter.properties
    SOURCEDEFS dirdef/vikstkFF.def
    table myschema.*;
    table ggs.*;
    ggsci> add extract ffr711, exttrailsource ./dirdat/pp
    ggsci> start extract ffr711
    Again, use relative paths when possible (the flatfilewriter.dll is expected to be found in the GG install directory). Put the ffwriter.properties file into dirprm, just as a best practice. This file, ffwriter.properties, is where you define your output directory and output files. Again, make sure you have a "table" statement in there for each schema in your trails.
    Problem I am facing:
    I am not sure where to find the generated flat file,
    even the reports are showing there is no data at manager process
    I am expecting replicat instead of extract at Flatfile FFR711.prm
    I have done this much what to do give me some pointers.
    The generated files are defined in the ffwriter.properties file. Search for the "rootdir" property, e.g.:
    goldengate.flatfilewriter.writers=csvwriter
    csvwriter.files.formatstring=output_%d_%010n
    csvwriter.files.data.rootdir=dirout
    csvwriter.files.control.ext=_data.control
    csvwriter.files.control.rootdir=dirout
    ...
    The main problem you have is: (1) use rmttrail, not rmtfile, and (2) don't forget the "table" statement, even in a pump.
    Also, the flat-file adapter runs in just an "extract" data pump; no "replicat" is ever used. Replicats are inherently tied to a target database, and the file-writer doesn't have any database functionality.
    Hope it helps,
    -m

  • File corruption with a very large .ai file

    A little background first:
    I am a graphic designer/cartographer with 15+ years of experience. I started making maps in Illustrator with version 6 and have upgraded to every version since.
    My machines:
    2x Mac Pro 8-core 3.0GHz, 16GB RAM, 10.5.7
    Mac Pro quad-core 2.66GHz, 8GB RAM, 10.5.7
    MacBook Pro 2.0GHz, 2GB RAM, 10.5.7
    Illustrator specs:
    All machines have CS4 installed as well as Illustrator 10.
    The 8-core MPs have the MAPublisher Plug-ins installed
    The 4-core and MacBook Pro (MBP) do not have the MAPublisher Plug-ins
    The problem I am having can be replicated on each of the machines. The MBP can't handle the file due to RAM. Since this occurs on machines that have MAPublisher installed and a machine that does not, I think we can rule out a plug-in issue.
    File specs:
    The original file: version 10, file size (uncompressed, no PDF support, and no font-embedding) is 36.4 MB. There are no raster effects or embedded/placed images. This is strictly a vector file. Artboard Dimensions: 85.288 in x 81.042 in
    The original file, converted with CS4, and then saved as a CS4 file: file size (uncompressed, no PDF support, and no font-embedding) is 97.9 MB.
    Brief Description of the problem:
    I have tried to convert this file into every version of CS and it has failed every time. With each version, it has resulted in an unusable file for different reasons. In CS-CS3, the file was completely unusable because of the opening/saving time; it could take as long as 3 hours to save the file. With CS4, this has been rectified, and I once again tried to convert it. Upon re-opening the 'converted' CS4 native file, the file is 'corrupted'.
    The file corruption is not your regular "This file can't be opened because of: X" corruption. The file opens after a save/close just fine. It is just that parts of the file get destroyed. To save space in this post, I have created a webpage that illustrates the problem I am having:
    http://newatlas.com/ai_problem/
    I have tried everything possible to make the file smaller and it is as slimmed down as I can make it. (Using symbols, styles, etc.) I have also tried to eliminate this as a font problem by replacing every font with an Adobe supplied font, cleared caches, etc. This does not work, so I think we can rule out a font issue. I have also reduced this file to contain no pattern fills, no gradients, and just used simple fills and strokes. All to no avail. I have also tried piecing the file back together into a new document by copying/pasting a layer at a time. Saving, closing and re-opening after each paste cycle. I can get about 95% of it put back together and then it will manifest the problem. The only thing I haven't done is to convert all of the type to outlines. This would not solve my problem since this is a map that I continually work on year after year. I also can't remove objects or cut the overall area of the map because this file is used to produce an atlas book, a wall map and custom boundary wall maps. You can view the entire file at:
    http://okc.cocpub.com
    If I do not convert the legacy text, the file saves/closes/re-opens just fine. It just takes a very long time. So this leads me to think that the cause of the problem is the number of editable type objects that this file has. Ever since Adobe changed the Type Engine, I haven't been able to use this file in current versions of Illustrator.
    If I could get this file to open, uncorrupted, I could finally get rid of Illustrator 10. Illustrator 10 does not have any problem with this file (and is still faster than CS4 in everything except selecting a lot of objects.)
    I am posting this on the forums for any other opinions/ideas from the 'Illustrator Gurus' as a first step. I want to get in contact with someone at Adobe to see if we can address this problem and possibly get it fixed with CS5. I know that this is a user-to-user forum, but I'm not sure who, where and how to contact Adobe for this issue. Maybe someone on these forums can help with that as well.
    Thank you for your patience for getting this far in my long post and I would really appreciate any response.
    Dave

    Thanks Wade for responding,
    Did you try trashing your Adobe Illustrator CS4 Settings folder in your User's Preferences?
    Yes, I've tried deleting prefs. Basically I've tried to rule out any problems with Illustrator as a whole. This issue has also occurred on a clean install of OS X and Illustrator, on a new user account, with the opening of this file being the first task that Illustrator has. There is no problem with Illustrator per se, but I think it is more of a limitation in Illustrator based on the number of type objects.
    You could also try saving it out of 10 as a PDF or as PostScript and distilling it, then open that in AI or place it in a blank AI document.
    Did you try to place instead of opening it?
    I haven't tried any of these since the resulting file would be utterly unusable. Basically this would create a 'flat' file with 'broken' strings of text (type on a path especially) and type being uneditable. (Now that I think about it, CS4 does a much better job of opening pdfs without breaking type.) I still think this approach is not really a prudent course of action since, as of now, I can continue to maintain this map in Illy 10.
    In my experimentation, the results are as follows:
    1. Opening the file without updating the legacy type, saving the file as a new document, closing and then re-opening results in a file you would expect. Every object is where it is supposed to be. Downfall of this method: I absolutely need the type to be editable, especially the 'Street Type' since this type is actually used to create map indexes.
    2. Opening the file with updating the legacy type, saving the file as a new document, closing and then re-opening results in a file that exhibits the exact behavior that has been posted in the thread-starter. This method results in the 'Bruce Gray' type being the duplicated item.
    3. Opening the file without updating the legacy type, then splitting the file into layer groups and saving as separate files. Then opening the resulting CS4 files, updating the legacy type, copy & pasting (with layer structure) into a new document results in a usable file up to a point. I can get about 95% of it put back together and then the problem manifests. I have thought that it might be a "bad" object on one of the layers but I have ruled that out by: a.) All of the resulting sub-files (files that are portions of the larger) exhibit no problems at all. Usually our PS printers find issues that Illy does not and there is no problem in RIPing the sub-files. b.) If I change the paste order, meaning copying & pasting from the top-most layers to the bottom-most layers, vice-versa, and a completely random paste order, different objects (other than the 'Bruce Gray' type) will be duplicated. I've had one of my park screens, a zip code type object and a school district boundary be the duplicated object.
    All of these experiments have led me to believe that the Illustrator Type Engine is the main factor. I just don't think it can handle that many individual point type objects. I know CS4 can handle the number of objects, based on the fact that a legacy type (non-updated) file works.
    I am almost entirely sure that Illustrator is working exactly as it is supposed to and that the vast majority of Illy users will never run into this issue. This file is by far the largest file that I work on. I would just like to be able to use an Intel native version of CS to continue maintaining this map.
    On a side note: About three years ago, I tried working with this file in FreeHand MX. FreeHand initially would open the Illy file without a problem. I could work on it, but when I would save it as a FreeHand file, close it and re-open it, I would get your standard file corruption. It would partially open, give me a corruption dialog, and open the file as a blank document. I always knew there was a reason to use Illustrator over FreeHand for making maps.

  • Working with Flat files in OWB

    I am getting flat files from mainframe tables, and I created external tables in OWB for mapping. Sometimes I get data from the flat files and sometimes I don't, even though the file exists, and there is no way to debug where the problem is in the files.
    When we import the file metadata, do we have to specify the column attributes, or should we leave them at 0 so that they take the default of 255?
    If you have a process for how the files should be created that is practised at your site without problems, please let me know.
    thnks...

    Hi,
    Do you mean that you are not getting flat file data into the external tables? In that case, look at the bad file and try to analyze the reason. Most often it is the lengths of the fields set up in the external table that create the problem. In that case, increase the length of the external table fields to a comfortably large value and then try again.
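    As a rough illustration (the directory, table and column names are placeholders), the bad file and log file locations are declared in the external table's access parameters, so you know exactly where to look when records are rejected:
    CREATE TABLE ext_src_file (
      field1 VARCHAR2(255),
      field2 VARCHAR2(255)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY src_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        BADFILE src_dir:'src_file.bad'
        LOGFILE src_dir:'src_file.log'
        FIELDS TERMINATED BY ','
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('src_file.csv')
    )
    REJECT LIMIT UNLIMITED;
    The .bad file holds the rejected records and the .log file gives the reason for each rejection; together with generous field lengths this usually pinpoints the offending rows.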
    Regards
    -AP

  • Loading Flat File and Temporary Table question

    I am new to ODI. Is there a method to load a flat file directly into the target table and not have ODI create a temporary table? I am asking this to find out how efficient ODI is at bulk loading a very large set of data.

    You can always modify the KM to skip the loading of the temporary table and load the target table directly. Another option would be to use the SUNOPSIS MEMORY ENGINE as your staging database. But, this option has some drawbacks, as it relies on available memory for the JVM and is not suitable for large datasets (which you mention you have).
    http://odiexperts.com/11g-oracle-data-integrator-%E2%80%93-part-711g-%E2%80%93-sunopsis-memory-engine/
    Regards,
    Michael Rainey
    Edited by: Michael R on Oct 15, 2012 2:44 PM

  • SQL Server 2012 Express bulk insert of a flat file with 1 million rows and "" as delimiter

    Hi,
    I wanted to see if anyone can help me out. I am on SQL Server 2012 Express. I cannot use OPENROWSET because my system is x64 and my Microsoft Office suite is x32 (Microsoft.Jet.OLEDB.4.0).
    So I used the Import wizard and that is not working either.
    The only thing that lets me import this large file is:
    CREATE TABLE #LOADLARGEFLATFILE (
      Column1 varchar(100),
      Column2 varchar(100),
      Column3 varchar(100),
      Column4 nvarchar(max)
    );
    BULK INSERT #LOADLARGEFLATFILE
    FROM 'C:\FolderBigFile\LARGEFLATFILE.txt'
    WITH (
      FIRSTROW = 2,
      FIELDTERMINATOR = '\t',
      ROWTERMINATOR = '\n'
    );
    The problem with CREATE TABLE and BULK INSERT is that my flat file comes with text qualifiers - "". Is there a way to prevent the quotes "" from loading in the bulk insert? Below is the data. 
    Column1  Column2  Column3  Column4
    "Socket Adapter"  8456AB  $4.25  "Item - Square Drive Socket Adapter | For "
    "Butt Splice"  9586CB  $14.51  "Item - Butt Splice"
    "Bleach"  6589TE  $27.30  "Item - Bleach | Size - 96 oz. | Container Type"
    Ed,
    Edwin Lopera

    Hi lgnusLumen,
    According to your description, you use BULK INSERT to import data from a data file to the SQL table. However, to be usable as a data file for bulk import, a CSV file must comply with the following restrictions:
    1. Data fields never contain the field terminator.
    2. Either none or all of the values in a data field are enclosed in quotation marks ("").
    In your data file the quotes aren't consistent. If you want to prevent the quotes "" from loading in the bulk insert, I recommend you use the SQL Server Import and Export Wizard, which is available in SQL Server Express; by setting " as the text qualifier for the flat file source, it will strip the double quotes from the columns.
    In other SQL Server version, we can use SQL Server Integration Services (SSIS) to import data from a flat file (.csv) with removing the double quotes. For more information, you can review the following article.
    http://www.mssqltips.com/sqlservertip/1316/strip-double-quotes-from-an-import-file-in-integration-services-ssis/
    In addition, you can create a function to convert a CSV to a usable format for Bulk Insert. It will replace all field-delimiting commas with a new delimiter. You can then use the new field delimiter instead of a comma. For more information, see:
    http://stackoverflow.com/questions/782353/sql-server-bulk-insert-of-csv-file-with-inconsistent-quotes
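    Another workaround, sketched below with a made-up staging table name, is to bulk load the file into an all-varchar staging table and strip the text qualifiers while copying the rows into your real table:
    CREATE TABLE dbo.StageLargeFlatFile (
      Column1 varchar(100),
      Column2 varchar(100),
      Column3 varchar(100),
      Column4 nvarchar(max)
    );
    BULK INSERT dbo.StageLargeFlatFile
    FROM 'C:\FolderBigFile\LARGEFLATFILE.txt'
    WITH (FIRSTROW = 2, FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');
    INSERT INTO #LOADLARGEFLATFILE (Column1, Column2, Column3, Column4)
    SELECT REPLACE(Column1, '"', ''), Column2, Column3, REPLACE(Column4, '"', '')
    FROM dbo.StageLargeFlatFile;
    Note that REPLACE removes every double quote, including any embedded in the data, so check whether that is acceptable for Column4.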
    Regards,
    Sofiya Li
    TechNet Community Support
