Fastest way to import data into big tables

Hi friends,
I have recently joined a billing product setup team at a telecom giant. As part of the setup process and initial testing we need to load a huge amount of data (nearly 1.5 billion records, ~200 GB) into some tables. Currently I am using impdp for the import, and it takes a lot of time (more than 9 hours to load this data).
I have tried the following ways to import the data (all of these tables are partitioned):
1. Normal impdp (single thread, i.e. parallel=1): this takes a lot of time (>24 hours).
2. Normal impdp, but partition by partition: this completes in relatively less time than the first method.
3. Drop the indexes, do a direct-path insert (INSERT /*+ APPEND */) from another schema in the same instance that already holds the data, then recreate the indexes (the whole process takes ~9 hours).
My questions to all:
1. Is there any other way or trick in the book I can try, to load the data in less time?
2. I have observed that even if I specify parallel=8 (or any other number), sometimes parallel workers are spawned and sometimes not. Can someone explain why?
3. How can I tell, before running impdp, whether my PARALLEL setting is actually going to spawn parallel workers? I have searched on this subject, but with no success as to how I can know this in advance.
I don't know what strategy to follow, because this is an involved and repetitive task for me, and it keeps me from concentrating on the other DBA tasks that come my way.
Cheers,
Kunwar

Assuming you have a table with 3 indexes: create a parameter file for each index (replace INDEX_NAME_n with your index name) and start them all at the same time, after the import of the table itself has completed:
DUMPFILE=<DUMPFILE> or NETWORK_LINK=<DBLINK>
DIRECTORY=DATA_PUMP_DIR
LOGFILE=<IMP_INDEX_NAME_n>
CONTENT=ALL
PARALLEL=1
JOB_NAME=IMP_<INDEX_NAME_n>
INCLUDE=TABLE_EXPORT/TABLE/INDEX:"IN('<INDEX_NAME_n>')"
TABLES=<SCHEMA>.<TABLE>

Change the following settings for the duration of the import:
increase pga_aggregate_target
increase db_writer_processes
db_block_checking=false
db_block_checksum=false
set undo and temp ts to autoextend
switch the database to NOARCHIVELOG mode
create 4GB redo logs
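
A minimal SQL*Plus sketch of those changes (the sizes and file names are placeholders, not recommendations; db_writer_processes is static, and the log mode can only be changed with the database mounted):

sqlplus -s "/ as sysdba" <<'EOF'
-- dynamic parameters: take effect immediately
ALTER SYSTEM SET pga_aggregate_target = 8G SCOPE=BOTH;
ALTER SYSTEM SET db_block_checking = FALSE SCOPE=BOTH;
ALTER SYSTEM SET db_block_checksum = FALSE SCOPE=BOTH;
-- static parameter: only takes effect after the next restart
ALTER SYSTEM SET db_writer_processes = 4 SCOPE=SPFILE;
-- let undo and temp grow as needed during the load
ALTER DATABASE DATAFILE '<undo_datafile>' AUTOEXTEND ON;
ALTER DATABASE TEMPFILE '<temp_file>' AUTOEXTEND ON;
-- NOARCHIVELOG requires a bounce:
--   SHUTDOWN IMMEDIATE; STARTUP MOUNT;
--   ALTER DATABASE NOARCHIVELOG; ALTER DATABASE OPEN;
EOF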
hth
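
PS, regarding questions 2 and 3: you can verify, while the job runs, whether Data Pump actually started workers. A hedged sketch; the DBA_DATAPUMP_JOBS view and the interactive STATUS command are standard, but the job name and credentials below are placeholders:

# The DEGREE column shows the parallelism the job is really using:
sqlplus -s "/ as sysdba" <<'EOF'
SELECT owner_name, job_name, state, degree FROM dba_datapump_jobs;
EOF

# Or attach to the running job and ask it directly; STATUS lists the master
# and each worker, and PARALLEL=<n> changes the degree on the fly:
impdp system/<password> ATTACH=<JOB_NAME>

One common reason workers are not spawned on import is a dump file set consisting of a single file: the documented guideline is to keep PARALLEL no larger than the number of files in the dump set, so exporting with DUMPFILE=exp%U.dmp and a matching PARALLEL gives the import something to parallelize.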

Similar Messages

  • The fastest way to extract the data from Sun One Directory Server 5.2

    I'm trying to figure out the best way to extract the whole contents of the Directory Tree of a Sun One DS 5.2. I assume the best way is db2ldif, but it takes about 17 minutes for a roughly 1.5 GB database, which seems quite a long time. Is there a faster way?

    Thanks,
    Actually the file I need should be readable - I need to parse it later on. But I think I just found the answer in the development kit. The utility is called dbscan and it works directly on the database files.
    Thanks again anyway,
    Ayelet
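
    For reference, a hedged sketch of the dbscan approach; the server-root path, instance name, and database file suffix (.db3/.db4) vary by installation, so treat these as examples only:

    SROOT=/var/mps/serverroot                    # assumption: your server root
    # Dump the entries straight from the id2entry database file into a
    # readable text file that can be parsed later:
    $SROOT/bin/slapd/server/dbscan \
        -f $SROOT/slapd-<instance>/db/userRoot/id2entry.db3 > /tmp/entries.txt
    # (db2ldif -n userRoot -a <output.ldif> remains the supported route when
    # you need well-formed LDIF rather than a raw dump.)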

  • Import a Date for Sorting

    I had a large database of my songs on MS Access which included the following info for each song: Title(Name), Artist and Date. The Date was the mm/dd/yy which the song first entered the Top 40 according to Billboard (from one of their reference books).
    I have imported all my music into iTunes and while I am satisfied with the automatic entry of Name & Artist during the rip, I would like to find a way to copy or enter the Top 40 Date for each song. I'd like to create playlists sorted by the Top 40 Date.
    Is it possible to add this info - hopefully in a scripted way?
    The only solution I can see is to enter a date in "yyyy-mm-dd" format into an unoccupied field such as "Sort Show" which is found under the SORTING Tab when you do a "Get Info" (ctrl-I) on a song. This requires entering the date for each song manually which in my case is prohibitive since I have over 2000 songs.
    Are there any scripts, apps or suggestions that simplify the process? Any easy way to import the data?

    Hi,
    1) getting date from Excel:
    What format does the date have?
    Are you able to convert the value to the Date type?
    2) putting into the SBO form:
    Are you filling it into the DataSource, or just typing it as a String into the EditText?
    These are the basic questions you have to ask yourself.
    1)
    For example: there is a difference between accessing the Excel Cells(x,y).Value and Cells(x,y).Text.
    .Value is usually a Date variable, while .Text is formatted as, for example, MM-DD-YYYY.
    Perhaps you're not able to convert MM-DD-YYYY into a date variable directly; in that case you will have to change the "MM-DD-YYYY" into the "DD.MM.YYYY" format so you can put it somewhere.
    2)
    The same problem arises when inserting the date into SBO: when filling EditText.String, you have to convert it into a date according to the SBO settings, for example "MM.DD.YYYY".
    There is also the SBObob object, which has the Format_StringToDate method; it converts a string into the format the DataSource expects.
    Regards,
    Jaro

  • How to get the date for the last day of a week?

    Is there an easy way to get the date for the last day of a week?
    e.g. a week starts on Monday and ends on Sunday
    January 11, 2005 is the start date for the week
    January 17, 2005 is the end date for the week
    or
    say
    February 26, 2003 is the start date for the week
    March 5, 2003 is the end date for the week
    I just need a simple way of figuring that out....
    I figured out how to get the start date for the week but just can't get the latter..
    formatting of the date is not of a concern.. that I know how to do
    thanks in advance

    How about something like the following?
     import java.util.Calendar;
     import java.util.Date;
     import java.util.GregorianCalendar;

     Calendar someDay = new GregorianCalendar(2005, 0, 11); // 2005 Jan 11
     // Note: months are zero-based, so January is 0, not 1.
     someDay.add(Calendar.DAY_OF_MONTH, 6); // add 6 days
     Date lastDayOfWeek = someDay.getTime();
     // If someDay was the start of a week, lastDayOfWeek is now
     // the last day of that week.
     System.out.println(lastDayOfWeek);

  • How to populate the data for additional fields in custom report of FBL5N tr

    Hello friends,
    I have to add some fields to the output of a custom report for transaction FBL5N.
    Till now I have only added the fields to the output.
    Now I have to write the code to populate the data for those fields in the program.
    1. Customer Credit Group (ACM/RCM): (Table: KNKK; Field: VKORG)
    2. Credit Representative Group: (Table: KNKK; Field: SBGRP)
    3. Customer Account Number (CAN#): (Table: KNKK; Field: KNKLI)
    4. Alternative Payer (ALTP#): (Table: KNA1; Field: KUNNR)
    5. Risk Category: (Table: KNKK; Field: CTLPC)
    6. Credit Info Number: (Table: KNKK; Field: KRAUS)
    7. Rating: (Table: KNKK; Field: DBRTG)
    8. Payment Index: (Table: KNKK; Field: DBPAY)
    9. Credit Control Area: (Table: KNKK; Field: KKBER)
    10. Company Code: (Table: KNB1; Field: BUKRS)
    11. Sales Organization: (Table: KNVV; Field: VKORG)
    These are all the fields I have to populate in the program.
    MY ATTEMPTS:
    I tried getting the data for the KNKK table with GET KNKK, but it is giving some garbage values.
    Also, the logical database used here is DDF, and I have to add VKORG from KNVV too, but KNVV is not part of DDF.
    Can anybody tell me how I should proceed?
    Thanks in advance.

    Thanks Andreas,
    I have the following doubts now.
    I have to add a Sales Organization field to the selection screen (this is given in the requirement).
    Now if I say GET KNKK, it will not take this Sales Organization field into account; so how should I fetch the data with these inputs (i.e. Company Code and Customer are the fields provided by the LDB, and Sales Organization is the field I put on the selection screen)?
    And what about KNVV-VKORG, given that KNVV is not present in the LDB?

  • Is there a way to sync my music from my iPod touch to a computer without using other computers? Because I lost my computer which has all the data for my iPod touch

    Is there a way to sync my music from my iPod touch to a computer without using other computers? Because I lost my computer which has all the data for my iPod touch.

    See also Recover your iTunes library from your iPod or iOS device.
    tt2

  • iMovie: what's the fastest way to import and save all original media?

    Good afternoon. OK, I need help please. Is there a way to simply take all the imported original media and save it all in one fell swoop, rather than having to import and then process/export clip by clip?
    I thought I had a quicker way just grabbing all the original media from the imported camera (right-clicking the iMovie library > Show Package Contents > Original Media), until I tried to play one of the original media files: no sound, and it was a .mov file. I assume .mp4 is better?
    I have 2800 hours of video I need to import off my camera. I don't need to modify them at all, aside from the occasional combining of clips into one movie, so I need the fastest way. Importing and then exporting clip by clip will take me forever. I would prefer to import and then just drag all 50 clips or so into my Movies folder, but as stated above that does not seem to work as expected, unless I'm missing a step.
    Thank you!

    PS: Can I take the original media files (.mov format) and batch convert them using HandBrake or MPEG Streamclip? If so, would the resulting videos then have sound? And which format (container, codec) is best to export them to? Thanks!
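
    On the batch-conversion question: HandBrake has a command-line build (HandBrakeCLI) that scripts well. A minimal sketch, assuming the clips sit in one folder and that the preset name exists in your HandBrake version (HandBrakeCLI --preset-list shows the valid names):

    #!/bin/sh
    # Convert every .mov in the current folder to .mp4; the preset carries
    # its own audio settings, so sound is re-encoded rather than dropped,
    # provided the source track is a codec HandBrake can read.
    for f in *.mov; do
      HandBrakeCLI -i "$f" -o "${f%.mov}.mp4" --preset "Fast 1080p30"
    done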

  • Is there any way to find the data transfer from client to Configuration Manager for health monitoring and hardware inventory

    Hi
    Can Configuration Manager provide a way to find the data transfer from client to Configuration Manager for health monitoring and hardware inventory? How can I know what amount of data is consumed during that process?

    Place archive_reports.sms in %systemroot%\ccm\inventory\temp\ for both 64-bit and 32-bit computers.
    There are two situations where you can use this depending on the type of client:
    1. To keep inventory reports on a client (that is not an MP), create the following file:
    %systemroot%\ccm\inventory\temp\archive_reports.sms
    2. To keep inventory reports on a MP (that is also a client), create the following file:
    <x>:\sms_ccm\inventory\temp\archive_reports.sms
    The XML file will be saved in the inventory\temp folder.
    More information on the above here: http://blogs.technet.com/b/configurationmgr/archive/2012/09/17/controlling-configuration-manager-2012-using-hidden-files.aspx

  • Is there a way of partitioning the data in the cubes

    Hello BPC Experts,
    we are currently running an AppSet with 4 applications. Anyway, two of these are getting really big.
    In BPC for MS there is a way of partitioning the data, as I saw in the How-Tos.
    In the NW version, BPC queries the MultiProvider. Is there a way to split the underlying basis cube into several (split by time or legal entity)?
    I think this would help to increase the speed a lot, as data could be read in parallel.
    Help is very much appreciated.
    Daniel
    Edited by: Daniel Schäfer on Feb 12, 2010 2:16 PM

    Hi Daniel,
    The short answer to your question is that, no, there is not a way to manually partition the infocubes at the BW level. The longer answer comes in several parts:
    1. BW automatically partitions the underlying database tables for BPC cubes based on request ID, depending on the BW setting for the cube and the underlying database.
    2. BW InfoCubes are very different from MS SQL server cubes (ROLAP approach in BW vs. MOLAP approach usually used in Analysis Services cubes). This results in BW cubes being a lot smaller, reads and writes being highly parallel, and no need for a large rollup operation if the underlying data changes. In other words, you probably wouldn't gain much from semantic partitioning of the BW cubes underlying BPC, except possibly in query performance, and only then if you have very high data volumes (>100 million records).
    3. BWA is an option for very large cubes. It is expensive, but if you are talking 100s of millions of records you should probably consider it. It uses a completely different data model than ROLAP or MOLAP and it is highly partition-able, though this is transparent to the BW administrator.
    4. In some circumstances it is useful to partition BW cubes. In the BW world, this is usually called "semantic partitioning". For example, you might want to partition cubes by company, time, or category. In BW this is currently supported through manually creating several basic cubes under a multiprovider. In BPC, this approach is not supported. It is highly recommended to not change the BPC-generated Infocubes or Queries in any way.
    5. If you have determined that you really need to semantically partition to manage data volumes in BPC, the current best way is probably to have multiple BPC applications with identical dimensions. In other words, partition in the application layer instead of in the data layer.
    Hopefully that's helpful to you.
    Ethan

  • One of the folders on my external hard drive has transformed into a unix executable file and I can no longer access my files. Is there any way to save the data?

    One of the folders on my external hard drive has transformed into a unix executable file and I can no longer access my files. Is there any way to save the data?

    Wow, I have seen files do that, but not a whole folder, as I recall!
    Could be many things, we should start with this...
    "Try Disk Utility
    1. Insert the Mac OS X Install disc, then restart the computer while holding the C key.
    2. When your computer finishes starting up from the disc, choose Disk Utility from the Installer menu. (In Mac OS X 10.4 or later, you must select your language first.)
    Important: Do not click Continue in the first screen of the Installer. If you do, you must restart from the disc again to access Disk Utility.
    3. Click the First Aid tab.
    4. Select your Mac OS X volume.
    5. Click Repair. Disk Utility checks and repairs the disk."
    http://docs.info.apple.com/article.html?artnum=106214
    Then try a Safe Boot, (holding Shift key down at bootup), run Disk Utility in Applications>Utilities, then highlight your drive, click on Repair Permissions, reboot when it completes.
    (Safe boot may stay on the gray radian for a long time, let it go, it's trying to repair the Hard Drive.)

  • Custom batch rename files with Aperture 3 in the following format: IMG_0023.cr2 to Smith_YYMMDD_0023.cr2? I cannot find a way to structure the date in Aperture as such, as well as extract only the camera file

    Please advise how to custom batch rename files with Aperture 3 in the following format: IMG_0023.cr2 to Smith_120816_0023.cr2. I cannot find a way to structure the date in Aperture as such (YYMMDD), or to extract only the camera file number (0023, for example). Adobe Bridge CS5 can do this, but none of the Adobe software is Retina-optimized, and it is terrible to look at.

    In Aperture you are limited to renaming files by the entries in the File Naming preset window.
    At what point are you looking to rename, on import or on export? It might be possible to do what you are looking to do external to Aperture, either via a script or other software.
    regards

  • How to import the data to Data manager

    Hi All,
    I am new to MDM and need your help in handling the following issue.
    In our setup we are sending the customer master data from R/3 -> XI -> MDM (R/3 -> XI as IDocs, XI -> MDM as XML files).
    With the help of an extraction job in R/3, IDocs are generated, and XI converts these IDocs to XML files created on the MDM server. Now my question is how to import the data from these XML files into the Data Manager.
    Note: the import map settings are already done in the system.
    Can anyone help me with the steps required to get this data into the Data Manager?
    Also, do I need to do some activity in MDM to import the data into the Data Manager each and every time, or is there an automatic way of doing this?
    Please guide me.
    Thanks in advance!

    Hi MDM User,
    Generally there are 2 cases while importing data:
    1) Initial load
    2) Delta load
    Initial load means importing the data into MDM for the first time. This is usually done by importing manually, without using the Import Server.
    Delta load means importing the data into MDM when there are changes to existing data or new data has been added. This is usually done automatically, using the Import Server.
    To import the data manually, just create a remote system and port in the Console, log in to the Import Manager using that remote system, create and save the map, and execute it. No other configuration is needed.
    To import the data automatically, without manual intervention, we go for the Import Server. To let this happen we need to set some configuration in the mdis.ini file of the Import Server: give the repository name, user name and password which are set in the Console for that repository, and also give the schedule time based on which the Import Server picks the file from the Ready folder.
    Go to the Import Manager, change the map and import action depending on your requirement, and save that map.
    You also need to set the port type to inbound and automatic, and select the map which you saved earlier in the MDM Console.
    As soon as we start the Import Server, it picks the file from the Ready folder of the MDM server, based on the configuration we set in mdis.ini, and executes the map.
    Here is the complete path of a Ready folder:
    SAP MDM 5.5 > Server > Distributions > bp1bocap080.bp.co_MSQL > BP3_PoC_Customer > Inbound > Siebel > SIEBEL_OB_CUS_SIEBELCUS01 > Ready
    This path has to be given as the target directory of the receiver communication channel in XI so that it puts the XML file in the above path.
    For better understanding, go through the blogs below.
    /people/balas.gorla/blog/2007/02/05/r3-xi-mdm-outbound-scenario
    /people/balas.gorla/blog/2006/09/27/mdm-xi-r3-integration
    Hope it helps
    Reward points, if found useful.
    Thanks
    Narendra

  • Need a script to import the data from flat file

    Hi Friends,
    Does anyone have any scripts to import data from flat files into an Oracle database (Linux OS)? I have to automate the script to run every 30 minutes, check for flat files in the Incoming directory, and process them without user interaction.
    Thanks.
    Srini
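
    No loader script was posted in this thread (the reply below is just a sample init.ora), but a minimal sketch of the usual approach, cron plus SQL*Loader, is shown here; every path, the control file, and the credentials are placeholders to adapt:

    #!/bin/sh
    # Schedule from cron every 30 minutes, e.g.:
    #   0,30 * * * * /home/oracle/scripts/load_incoming.sh
    IN=/data/incoming
    DONE=/data/processed
    for f in "$IN"/*.dat; do
      [ -e "$f" ] || exit 0            # no files waiting: nothing to do
      sqlldr userid=scott/tiger control=/home/oracle/ctl/load.ctl \
             data="$f" log="$f.log" bad="$f.bad"
      mv "$f" "$DONE"/                 # archive so it is not loaded twice
    done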

    Here is my init.ora file
    # $Header: init.ora 06-aug-98.10:24:40 atsukerm Exp $
    # Copyright (c) 1991, 1997, 1998 by Oracle Corporation
    # NAME
    # init.ora
    # FUNCTION
    # NOTES
    # MODIFIED
    # atsukerm 08/06/98 - fix for 8.1.
    # hpiao 06/05/97 - fix for 803
    # glavash 05/12/97 - add oracle_trace_enable comment
    # hpiao 04/22/97 - remove ifile=, events=, etc.
    # alingelb 09/19/94 - remove vms-specific stuff
    # dpawson 07/07/93 - add more comments regarded archive start
    # maporter 10/29/92 - Add vms_sga_use_gblpagfile=TRUE
    # jloaiza 03/07/92 - change ALPHA to BETA
    # danderso 02/26/92 - change db_block_cache_protect to dbblock_cache_p
    # ghallmar 02/03/92 - db_directory -> db_domain
    # maporter 01/12/92 - merge changes from branch 1.8.308.1
    # maporter 12/21/91 - bug 76493: Add control_files parameter
    # wbridge 12/03/91 - use of %c in archive format is discouraged
    # ghallmar 12/02/91 - add global_names=true, db_directory=us.acme.com
    # thayes 11/27/91 - Change default for cache_clone
    # jloaiza 08/13/91 - merge changes from branch 1.7.100.1
    # jloaiza 07/31/91 - add debug stuff
    # rlim 04/29/91 - removal of char_is_varchar2
    # Bridge 03/12/91 - log_allocation no longer exists
    # Wijaya 02/05/91 - remove obsolete parameters
    # Example INIT.ORA file
    # This file is provided by Oracle Corporation to help you customize
    # your RDBMS installation for your site. Important system parameters
    # are discussed, and example settings given.
    # Some parameter settings are generic to any size installation.
    # For parameters that require different values in different size
    # installations, three scenarios have been provided: SMALL, MEDIUM
    # and LARGE. Any parameter that needs to be tuned according to
    # installation size will have three settings, each one commented
    # according to installation size.
    # Use the following table to approximate the SGA size needed for the
    # three scenarious provided in this file:
    # -------Installation/Database Size------
    # SMALL MEDIUM LARGE
    # Block 2K 4500K 6800K 17000K
    # Size 4K 5500K 8800K 21000K
    # To set up a database that multiple instances will be using, place
    # all instance-specific parameters in one file, and then have all
    # of these files point to a master file using the IFILE command.
    # This way, when you change a public
    # parameter, it will automatically change on all instances. This is
    # necessary, since all instances must run with the same value for many
    # parameters. For example, if you choose to use private rollback segments,
    # these must be specified in different files, but since all gc_*
    # parameters must be the same on all instances, they should be in one file.
    # INSTRUCTIONS: Edit this file and the other INIT files it calls for
    # your site, either by using the values provided here or by providing
    # your own. Then place an IFILE= line into each instance-specific
    # INIT file that points at this file.
    # NOTE: Parameter values suggested in this file are based on conservative
    # estimates for computer memory availability. You should adjust values upward
    # for modern machines.
    # You may also consider using Database Configuration Assistant tool (DBCA)
    # to create INIT file and to size your initial set of tablespaces based
    # on the user input.
    # replace DEFAULT with your database name
    db_name=DEFAULT
    db_files = 80 # SMALL
    # db_files = 400 # MEDIUM
    # db_files = 1500 # LARGE
    db_file_multiblock_read_count = 8 # SMALL
    # db_file_multiblock_read_count = 16 # MEDIUM
    # db_file_multiblock_read_count = 32 # LARGE
    db_block_buffers = 100 # SMALL
    # db_block_buffers = 550 # MEDIUM
    # db_block_buffers = 3200 # LARGE
    shared_pool_size = 3500000 # SMALL
    # shared_pool_size = 5000000 # MEDIUM
    # shared_pool_size = 9000000 # LARGE
    log_checkpoint_interval = 10000
    processes = 50 # SMALL
    # processes = 100 # MEDIUM
    # processes = 200 # LARGE
    parallel_max_servers = 5 # SMALL
    # parallel_max_servers = 4 x (number of CPUs) # MEDIUM
    # parallel_max_servers = 4 x (number of CPUs) # LARGE
    log_buffer = 32768 # SMALL
    # log_buffer = 32768 # MEDIUM
    # log_buffer = 163840 # LARGE
    # audit_trail = true # if you want auditing
    # timed_statistics = true # if you want timed statistics
    max_dump_file_size = 10240 # limit trace file size to 5 Meg each
    # Uncommenting the line below will cause automatic archiving if archiving has
    # been enabled using ALTER DATABASE ARCHIVELOG.
    # log_archive_start = true
    # log_archive_dest = disk$rdbms:[oracle.archive]
    # log_archive_format = "T%TS%S.ARC"
    # If using private rollback segments, place lines of the following
    # form in each of your instance-specific init.ora files:
    # rollback_segments = (name1, name2)
    # If using public rollback segments, define how many
    # rollback segments each instance will pick up, using the formula
    # # of rollback segments = transactions / transactions_per_rollback_segment
    # In this example each instance will grab 40/5 = 8:
    # transactions = 40
    # transactions_per_rollback_segment = 5
    # Global Naming -- enforce that a dblink has same name as the db it connects to
    global_names = TRUE
    # Edit and uncomment the following line to provide the suffix that will be
    # appended to the db_name parameter (separated with a dot) and stored as the
    # global database name when a database is created. If your site uses
    # Internet Domain names for e-mail, then the part of your e-mail address after
    # the '@' is a good candidate for this parameter value.
    # db_domain = us.acme.com      # global database name is db_name.db_domain
    # FOR DEVELOPMENT ONLY, ALWAYS TRY TO USE SYSTEM BACKING STORE
    # vms_sga_use_gblpagfil = TRUE
    # FOR BETA RELEASE ONLY. Enable debugging modes. Note that these can
    # adversely affect performance. On some non-VMS ports the db_block_cache_*
    # debugging modes have a severe effect on performance.
    #_db_block_cache_protect = true # memory protect buffers
    #event = "10210 trace name context forever, level 2" # data block checking
    #event = "10211 trace name context forever, level 2" # index block checking
    #event = "10235 trace name context forever, level 1" # memory heap checking
    #event = "10049 trace name context forever, level 2" # memory protect cursors
    # define parallel server (multi-instance) parameters
    #ifile = ora_system:initps.ora
    # define two control files by default
    control_files = (ora_control1, ora_control2)
    # Uncomment the following line if you wish to enable the Oracle Trace product
    # to trace server activity. This enables scheduling of server collections
    # from the Oracle Enterprise Manager Console.
    # Also, if the oracle_trace_collection_name parameter is non-null,
    # every session will write to the named collection, as well as enabling you
    # to schedule future collections from the console.
    # oracle_trace_enable = TRUE
    # Uncomment the following line, if you want to use some of the new 8.1
    # features. Please remember that using them may require some downgrade
    # actions if you later decide to move back to 8.0.
    #compatible = 8.1.0
    Thanks.
    Srini

  • How many ways to edit the data in PSA?

    Hi,
    How many ways are there to edit the data in the PSA? Can we edit both transaction data and master data?

    Hi,
    Yes, you can edit data in the PSA for both master data and transaction data. The only difference is that for transaction data you have to delete the request from the target before editing, while for master data there is no need to delete any request; you can edit the PSA directly.
    The easiest method to edit the PSA has already been mentioned above.
    But suppose you don't have authorization to edit the data in the PSA and you still want to edit it; then you can follow the process mentioned below.
    You can call up the function module RSAR_ODS_API_GET with the list of request IDs given by the function module RSSM_API_REQUEST_GET. The function module RSAR_ODS_API_GET no longer recognizes InfoSources on the interface, rather it recognizes the request IDs instead. With the parameter I_T_SELECTIONS, you can restrict reading data records in the PSA table with reference to the fields of the transfer structure. In your program, the selections are filled and transferred to the parameter I_T_SELECTIONS.
    The import parameter causes the function module to output the data records in the parameter E_T_DATA. Data output is unstructured, since the function module RSAR_ODS_API_GET works generically, and therefore does not recognize the specific structure of the PSA. You can find information on the field in the PSA table using the parameter E_T_RSFIELDTXT.
    RSAR_ODS_API_PUT
    After merging or checking and subsequently changing the data, you can write the altered data records into the PSA table with the function module RSAR_ODS_API_PUT. To be able to write request data into the table with the help of this function module, you have to enter the corresponding request ID. The parameter E_T_DATA contains the changed data records.
    Check these links:
    http://help.sap.com/saphelp_nw04/helpdata/en/4f/8d4b38187a8442e10000009b38f8cf/frameset.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/67efb9bb-0601-0010-f7a2-b582e94bcf8a
    Hope this helps.......
    Regards,
    Debjani......

  • My iPhone 5 has broken and is being replaced with a new iPhone tomorrow. However, my carrier (Orange) will be picking up my broken iPhone and I am unsure how to secure the content and iCloud data on the broken phone. Is there a way to disable the data?

    My iPhone 5 has broken and is being replaced with a new iPhone tomorrow. However, my carrier (Orange) will be picking up my broken iPhone and I am unsure how to secure the content and iCloud data on the broken phone. Is there a way to disable the data held on it and ensure that if it is fixed, nobody can use or see my data and access my account?

    Hi Gazpan,
    Thanks for visiting Apple Support Communities.
    I recommend using the steps in this article to back up your iPhone if possible:
    iOS: Back up and restore your iOS device with iCloud or iTunes
    http://support.apple.com/kb/ht1766
    You may also find this advice helpful for your situation:
    What to do before selling or giving away your iPhone, iPad, or iPod touch
    http://support.apple.com/kb/ht5661
    If you no longer have your iOS device
    If you're using iCloud and Find My iPhone on the device, you can erase the device remotely and remove it from your account by signing in to icloud.com/find, selecting the device, and clicking Erase. When the device has been erased, click Remove from Account.
    If you're unable to complete either of the above steps, you should change your Apple ID password. Changing your password won't remove any personal information that is cached on the device, but it will make sure that the new owner can't delete your information from iCloud.
    Cheers,
    Jeremy
