Large amounts of multi-level content

I’m developing texts for screens in a web-based resource manual, but am doing so way ahead of technical decisions (and funding) on what the software structure of that manual will eventually be. (I know, but ….) I know that some/all of what I am doing will have to be changed once they get to that point, but in the meantime have been asked to continue working on the texts.
The issue I’m running up against is how to eventually be able to present multiple levels of texts in an easily accessible, clearly understandable format. The audience initially would be PC users looking at my content, but I think the people who eventually work on the software would at least need to be able to copy my texts. With regard to content, I’ll have text page A which will eventually need to link to pages B, C and D; page B will need to link to A, D, E, F, H and X; and so on … I don’t know for sure, but am guessing we’re looking at thousands of pages eventually with lots and lots of links. The material on the pages is mostly text, although some tables, charts and flowchart-type diagrams will be in the mix. I won’t be using photos or other media files.
Up until now, I’ve been using Word (since it’s easily shareable with my PC-only colleagues), and working without links. But it’s getting to the point where I’ve got so much info that nobody but me would understand my personal notation system indicating what would eventually be linked where. I’ve played around with using Word’s hyperlinking, but have run into the problem that as soon as I set up a hyperlink, the document linked to seems to become read-only … so if I edit the linked-to document (which is likely – my material also changes pretty frequently), I now have multiple versions of the page floating around and would have to update all of my links.
And I’ve played with iWeb – assuming it published OK and there were no issues with new versions or PC-type browsers not being able to read the website, it seems like it might work, although I find dealing with objects and text editing more difficult than in Word. It’s also disappointing that it’s not easy to insert and hyperlink to anchors. But I wonder about the iWeb assumptions I mentioned above, and whether iWeb has the capacity to handle at least many hundreds of pages. BTW I’m a newbie to this (don’t know HTML, Java, etc.).
So, bottom line, I would appreciate any general comments or suggestions on how to present all this info clearly, and specifically wonder if anyone has info on dealing with Word hyperlinking, and on the capacity of iWeb to do what I need. I am looking to use software I already have (Office, Pages, iWeb, etc.), but might be able to spring for something else if it were low-cost. Thanks in advance!

I don't think iWeb makes sense for your project, both in terms of its size and purpose. The need to constantly back up your Domain file, and the inability to do anything to your site without a Mac, iWeb, and the latest Domain file for it, along with possible issues with Windows browsers, seem like major handicaps. You might want to look at something like NVU/Kompozer:
http://www.nvu.com/

Similar Messages

  • If I have a large amount of music/movie content in the cloud, how do I get it onto my iPad/iPhone so I can hear/see it when I don't have access to WiFi?


    If you go into the Purchased tab in the iTunes Store app on your iPad and iPhone then you might be able to redownload them from there - they should then be accessible in the Music / Videos app without you needing to be online.

  • Updated content for multi-level record structure in PLSQL

    Hi All,
    please help me
    I need a FUNCTION which would take PERSON_ID as the INPUT PARAMETER and RETURN the FULL DETAILS in a multi-level record structure in PL/SQL.
    CREATE TABLE people(
    name VARCHAR2(5),
    person_id number);
    INSERT INTO people(name,person_id) VALUES ('n1',1);
    INSERT INTO people(name,person_id) VALUES ('n2',2);
    INSERT INTO people(name,person_id) VALUES ('n3',3);
    INSERT INTO people(name,person_id) VALUES ('n4',4);
    INSERT INTO people(name,person_id) VALUES ('n5',5);
    INSERT INTO people(name,person_id) VALUES ('n6',6);
    A person can be assigned many tasks. Below, we can see person_id = 1 has 2 tasks: 10, 20.
    CREATE TABLE tasks(
    task_id number,              --->PK
    task_name VARCHAR2(20),
    person_id number);           -->FK to People
    INSERT INTO tasks(task_id,task_name,person_id) VALUES (10, 'cleaning',1);
    INSERT INTO tasks(task_id,task_name,person_id) VALUES (20, 'washing',1);
    INSERT INTO tasks(task_id,task_name,person_id) VALUES (30, 'sweeping',2);
    INSERT INTO tasks(task_id,task_name,person_id) VALUES (40, 'ironing',3);
    Each TASK has many ACTIVITIES, as below. We can see task_id = 10 has 3 ACTIVITIES, with activity_id 100, 200, 300.
    CREATE TABLE activities(
    activity_id number,
    activity_name VARCHAR2(50),
    task_id number);             --->FK to task table
    INSERT INTO activities(activity_id,activity_name,task_id)VALUES (100, 'Clean home',10);
    INSERT INTO activities(activity_id,activity_name,task_id)VALUES (200, 'Clean Garden',10);
    INSERT INTO activities(activity_id,activity_name,task_id)VALUES (300, 'Clean clothes',10);
    INSERT INTO activities(activity_id,activity_name,task_id)VALUES (400, 'Wash car',20);
    Write a FUNCTION which would take PERSON_ID as the INPUT PARAMETER and RETURN the FULL DETAILS in a multi-level record structure in PL/SQL.
    Meaning we would get:
    First, person_details for a person --> next level is TASKS --> Activities_list in a NESTED RECORD SET
    create or replace function person_details(person_id NUMBER)
    RETURNS a
    PERSON_DETAILS RECORD structure as shown below. A person --> Tasks --> all activities
    record PERSON_DETAILS[1]
         person_id,
         name
         TASKS_DETAILS[1]          ---->2nd level
                     task_id[1],
               task_name[1]
                   ACTIVITIES[1]----->3rd level
                        activity_id[1],
                        activity_name[1]     
                      ACTIVITIES[2]
                        activity_id[2],
                        activity_name[2]                    
                   ACTIVITIES[3]
                        activity_id[3],
                        activity_name[3]
    (1) Most important part: how do I DEFINE AND DECLARE the RESULT RECORD SET in my FUNCTION?
    (2) HOW DO WE DYNAMICALLY ALLOCATE MEMORY for the record structure based on the no. of rows returned by each SELECT?
    (3) Access the nested levels and FILL in the DATA in the above record separately USING SELECT statements.
    /* SELECT 1 -- Find and fill Person */
    select person_id, name from people where person_id = 1;
    --> From this SELECT, fill the OUTER record PERSON_DETAILS[no. of records] --> person_id, name
    /* SELECT 2 -- I want to FIND all TASKS for THE ABOVE PERSON and fill the next part */
    select task_id, task_name from tasks where person_id = PERSON_DETAILS[1].person_id;
    We get two TASK_IDs (10 and 20).
    --> From this, HOW TO fill DATA for each TASK, i.e. PERSON_DETAILS[1] --> TASKS_DETAILS[1st record].task_id --> task_id, task_name
    /* SELECT 3 -- I want to FIND all ACTIVITIES for THE ABOVE TASKS and fill the next part */
    For EACH of the TASKS found, we need to LOOP and
    select activity_id, activity_name from activities where task_id = PERSON_DETAILS[1] --> TASKS_DETAILS[1st record] --> ACTIVITY_DETAILS[1].activity_id;
    I have tried my best to explain with the tables and the relationships. I just hope it's not confusing now. Edited by: user_7000011 on 01-Apr-2009 12:46

    try this one.
    Learned something new today.
    CREATE TABLE temp_clob_tab(result CLOB);
    -- create the inner types first so the outer types can reference them
    create or replace type activity_t as object("@activity_id" number, activity_name varchar2(20));
    create or replace type activity_tab as table of activity_t;
    CREATE or replace TYPE task_t AS OBJECT("@task_id"    NUMBER,
                                last_name      VARCHAR2(20),
                                activitylist   activity_tab);
    CREATE or replace TYPE tasklist_t AS TABLE OF task_t;
    CREATE or replace TYPE people_t AS OBJECT("@people_id" NUMBER,
                                 task_name  VARCHAR2(20),
                                 task_t     tasklist_t);
    DECLARE
      qryCtx DBMS_XMLGEN.ctxHandle;
      result CLOB;
    BEGIN
      qryCtx := DBMS_XMLGEN.newContext
        ('SELECT people_t(person_id,
                        name,
                        CAST(MULTISET
                               (SELECT e.task_id, e.task_name, CAST(MULTISET
                                        (SELECT activity_id, activity_name FROM activities a
                                            WHERE a.task_id = e.task_id) AS activity_tab)
                                  FROM tasks e
                                  WHERE e.person_id = d.person_id)
                             AS tasklist_t))
            AS peoplexml
            FROM people d
            WHERE person_id = 1');
      -- now get the result
      result := DBMS_XMLGEN.getXML(qryCtx);
      INSERT INTO temp_clob_tab VALUES (result);
      -- close context
      DBMS_XMLGEN.closeContext(qryCtx);
    END;
    /
    select * from temp_clob_tab;
    Output:
    <?xml version="1.0"?>
    <ROWSET>
    <ROW>
      <PEOPLEXML people_id="1">
       <TASK_NAME>n1</TASK_NAME>
       <TASK_T>
        <TASK_T task_id="10">
         <LAST_NAME>cleaning</LAST_NAME>
         <ACTIVITYLIST>
          <ACTIVITY_T activity_id="100">
           <ACTIVITY_NAME>Clean home</ACTIVITY_NAME>
          </ACTIVITY_T>
          <ACTIVITY_T activity_id="200">
           <ACTIVITY_NAME>Clean Garden</ACTIVITY_NAME>
          </ACTIVITY_T>
          <ACTIVITY_T activity_id="300">
           <ACTIVITY_NAME>Clean clothes</ACTIVITY_NAME>
          </ACTIVITY_T>
         </ACTIVITYLIST>
        </TASK_T>
        <TASK_T task_id="20">
         <LAST_NAME>washing</LAST_NAME>
         <ACTIVITYLIST>
          <ACTIVITY_T activity_id="400">
           <ACTIVITY_NAME>Wash car</ACTIVITY_NAME>
          </ACTIVITY_T>
         </ACTIVITYLIST>
        </TASK_T>
       </TASK_T>
      </PEOPLEXML>
    </ROW>
    </ROWSET>

  • How to create multi-level style pulling in a .jpg image as a bullet?

    From within RoboHelp 8 HTML, when creating/editing a 3-tier multi-level style, I want to use a .jpg image for the bullet(s).  I can not find a way to point to the image while in Edit mode.  My only choices are predefined bullets for the List Style.
    When searching for an answer within the forum, I noticed mention of a Baggage folder in RoboHelp.  I do not have a Baggage folder.  I do have links to websites accessible from within the web-based Help file I've created beneath the URLs folder in the Project Manager.
    Thank you for any help you can provide.

    Hi there
    I never really played much with adding images to the oddly formatted Multi-list styles.
    The Project Manager has two views. Sounds like you are using the new "global" view. In that view you don't see a special area labeled Baggage Files. In this view the files are simply listed among the other content. If you change the view to Classic (I think it's the first icon on the left of the pod toolbar) you will then see the Baggage Files folder.
    Cheers... Rick

  • With journaling, I have found that my computer is saving a large amount of data, logs of all the changes I make to files; how can I clean up these logs?

    For example, in Notes, I have written three notes; however, if I click on 'All On My Mac' on the sidebar, I see about 10 different versions of each note I make; it saves a version every time I add or delete a sentence.
    I also noticed that when I write an email, Mail saves about 10 or more draft versions before the final is sent.
    I understand that all this journaling provides a level of security and prevents data loss; but I was wondering, is there a function to clean up journal logs once in a while?
    Thanks
    Roz

    Are you using Microsoft Word?  Microsoft thinks the users are idiots. They put up a lot of pointless messages that annoy & worry users.  I have seen this message from Microsoft Word.  It's annoying.
    As BDaqua points out...
    When you copy information via edit > copy, command + c, edit > cut, or command + x, you place the information on the clipboard. When you paste information, edit > paste or command + v, you copy information from the clipboard to your data file.
    If you edit > cut or command + x and you do not paste the information and you quit Word, you could be losing information.  Microsoft is very worried about this. When you quit Word, Microsoft checks if there is information on the clipboard & if so, Microsoft puts out this message.
    You should be saving your work more than once a day. I'd save every 5 minutes.  Command + s does a save.
    Robert

  • My iMac running 10.7.5 crashes when copy and paste large amounts of information like a picture.

    My iMac running 10.7.5 crashes when I store a large amount of data, like when I copy and paste a picture. It also has started being painfully slow to open the first page in Safari. Here is some information a program gathered on my computer.   -Thanks!
    Problem description:
    iMac 10.7.5.  Crashes when copy & paste large amounts and slow to first open first web page then back to normal speed
    EtreCheck version: 2.1.8 (121)
    Report generated April 13, 2015 3:05:27 PM EDT
    Download EtreCheck from http://etresoft.com/etrecheck
    Click the [Click for support] links for help with non-Apple products.
    Click the [Click for details] links for more information about that line.
    Hardware Information: ℹ️
        iMac (27-inch, Late 2009) (Technical Specifications)
        iMac - model: iMac10,1
        1 3.33 GHz Intel Core 2 Duo CPU: 2-core
        8 GB RAM
            BANK 0/DIMM0
                2 GB DDR3 1067 MHz ok
            BANK 1/DIMM0
                2 GB DDR3 1067 MHz ok
            BANK 0/DIMM1
                2 GB DDR3 1067 MHz ok
            BANK 1/DIMM1
                2 GB DDR3 1067 MHz ok
        Bluetooth: Old - Handoff/Airdrop2 not supported
        Wireless: Unknown
    Video Information: ℹ️
        ATI Radeon HD 4670 - VRAM: 256 MB
            iMac 2560 x 1440
    System Software: ℹ️
        Mac OS X 10.7.5 (11G63) - Time since boot: 14 days 7:47:18
    Disk Information: ℹ️
        ST31000528AS disk0 : (1 TB)
            disk0s1 (disk0s1) <not mounted> : 210 MB
            Macintosh HD (disk0s2) / : 999.35 GB (777.11 GB free)
            Recovery HD (disk0s3) <not mounted>  [Recovery]: 650 MB
        OPTIARC DVD RW AD-5680H 
    USB Information: ℹ️
        Sunplus Innovation Technology. USB to Serial-ATA bridge 1 TB
            disk2s1 (disk2s1) <not mounted> : 210 MB
            Time Machine Backups (disk2s2) /Volumes/Time Machine Backups : 999.86 GB (143.50 GB free)
        Apple Inc. Built-in iSight
        Apple Internal Memory Card Reader
        Apple Computer, Inc. IR Receiver
        HP Deskjet 9800
        Apple Inc. BRCM2046 Hub
            Apple Inc. Bluetooth USB Host Controller
    Kernel Extensions: ℹ️
            /Library/Extensions
        [loaded]    org.virtualbox.kext.VBoxDrv (4.2.18) [Click for support]
        [loaded]    org.virtualbox.kext.VBoxNetAdp (4.2.18) [Click for support]
        [loaded]    org.virtualbox.kext.VBoxNetFlt (4.2.18) [Click for support]
        [loaded]    org.virtualbox.kext.VBoxUSB (4.2.18) [Click for support]
            /Library/Parallels/Parallels Service.app
        [loaded]    com.parallels.kext.prl_hid_hook (7.0 15107.796624) [Click for support]
        [loaded]    com.parallels.kext.prl_hypervisor (7.0 15107.796624) [Click for support]
        [loaded]    com.parallels.kext.prl_netbridge (7.0 15107.796624) [Click for support]
        [loaded]    com.parallels.kext.prl_vnic (7.0 15107.796624) [Click for support]
            /System/Library/Extensions
        [loaded]    com.parallels.kext.prl_usb_connect (7.0 15107.796624) [Click for support]
    Startup Items: ℹ️
        HP IO: Path: /Library/StartupItems/HP IO
        ParallelsTransporter: Path: /Library/StartupItems/ParallelsTransporter
        VirtualBox: Path: /Library/StartupItems/VirtualBox
        Startup items are obsolete in OS X Yosemite
    Launch Agents: ℹ️
        [running]    com.hp.devicemonitor.plist [Click for support]
        [loaded]    com.hp.help.tocgenerator.plist [Click for support]
        [loaded]    com.parallels.desktop.launch.plist [Click for support]
        [loaded]    com.parallels.DesktopControlAgent.plist [Click for support]
        [running]    com.parallels.vm.prl_pcproxy.plist [Click for support]
    Launch Daemons: ℹ️
        [loaded]    com.adobe.fpsaud.plist [Click for support]
        [loaded]    com.hikvision.iVMS-4200.plist [Click for support]
        [loaded]    com.microsoft.office.licensing.helper.plist [Click for support]
        [running]    com.parallels.desktop.launchdaemon.plist [Click for support]
    User Launch Agents: ℹ️
        [loaded]    com.adobe.ARM.[...].plist [Click for support]
        [running]    com.akamai.single-user-client.plist [Click for support]
        [loaded]    com.google.keystone.agent.plist [Click for support]
        [not loaded]    org.virtualbox.vboxwebsrv.plist [Click for support]
    User Login Items: ℹ️
        iTunesHelper    UNKNOWN  (missing value)
        GrowlHelperApp    Application  (/Users/[redacted]/Library/PreferencePanes/Growl.prefPane/Contents/Resources/GrowlHelperApp.app)
        Microsoft AU Daemon    Application  (/Applications/Microsoft AutoUpdate.app/Contents/MacOS/Microsoft AU Daemon.app)
        Air Mouse Server    UNKNOWN  (missing value)
        CrossOver CD Helper    UNKNOWN  (missing value)
        Pages    UNKNOWN  (missing value)
        Dropbox    Application  (/Applications/Dropbox.app)
        Wondershare Helper Compact    Application  (/Users/[redacted]/Library/Application Support/Helper/Wondershare Helper Compact.app)
        AdobeResourceSynchronizer    Application Hidden (/Applications/Adobe Reader.app/Contents/Support/AdobeResourceSynchronizer.app)
        HP Scheduler    Application  (/Library/Application Support/Hewlett-Packard/Software Update/HP Scheduler.app)
    Internet Plug-ins: ℹ️
        JavaAppletPlugin: Version: 14.9.0 - SDK 10.7 Check version
        FlashPlayer-10.6: Version: 16.0.0.305 - SDK 10.6 [Click for support]
        QuickTime Plugin: Version: 7.7.1
        AdobePDFViewerNPAPI: Version: 11.0.10 - SDK 10.6 [Click for support]
        Flash Player: Version: 16.0.0.305 - SDK 10.6 Outdated! Update
        AdobePDFViewer: Version: 11.0.10 - SDK 10.6 [Click for support]
        SharePointBrowserPlugin: Version: 14.4.8 - SDK 10.6 [Click for support]
        Google Earth Web Plug-in: Version: 6.0 [Click for support]
        Silverlight: Version: 4.0.60531.0 [Click for support]
        iPhotoPhotocast: Version: 7.0
    Safari Extensions: ℹ️
        1-ClickWeather
        Reload Button
    3rd Party Preference Panes: ℹ️
        Akamai NetSession Preferences  [Click for support]
        Flash Player  [Click for support]
        Growl  [Click for support]
        MacFUSE  [Click for support]
    Time Machine: ℹ️
        Time Machine not configured!
    Top Processes by CPU: ℹ️
             2%    WindowServer
             1%    prl_disp_service
             0%    fontd
             0%    AdobeReader
             0%    ODSAgent
    Top Processes by Memory: ℹ️
        120 MB    mds
        112 MB    AdobeReader
        94 MB    WindowServer
        94 MB    Finder
        69 MB    loginwindow
    Virtual Memory Information: ℹ️
        5.44 GB    Free RAM
        1.78 GB    Active RAM
        464 MB    Inactive RAM
        901 MB    Wired RAM
        1.64 GB    Page-ins
        0 B    Page-outs

    I believe that insufficient RAM may be the source of some of your problems. If you have somewhere from 4 to 8 GB of RAM, you will experience smoother computing. 3 GB doesn't seem right, so you might want to learn more by going to this site:
    http://www.crucial.com/store/drammemory.aspx
    I don't know what's happening with your optical drive, but it seems you use your drive quite a bit. In that case, look into a lens cleaner for your machine. It's inexpensive and works quite well.
    I hope you'll post here with your results!

  • Regarding TXT File data truncation due to large amount of data

    Hi Guys,
    I am downloading data to a TXT file in the background. I am getting truncation of the records due to the large amount of data. If it is less data, it works fine.
    I have checked the internal table SIZE for this, and anyway I have declared it with OCCURS 0 only.
    So please help me to find out what the reason may be. I am confused: is there any limitation for a TXT file?
    Please help me, guys.
    Thanks in advance.
    Prabhu.R

    Hi Rakesh,
    two ways.
    1. Ask your BASIS team to increase the memory level.
    2. Check the PACKAGE SIZE option of the SELECT statement.
    Here you won't select all the data at once but in packets of a specified size. So get the packets of data and process them.
    Just press F1 on PACKAGE SIZE. That explanation will be enough to proceed further.
    Thanks,
    Vinod.

  • Importing a large amount of text into InDesign (Newbie question!)

    Hello all,
    I am new to InDesign and would really appreciate some tips/advice on how best to transfer a large amount of text from Microsoft Word into an InDesign document. I have been tasked with creating a very simple pocket-sized book, the content of which will be a series of business acronyms (700+) and their definitions (just imagine a very simple dictionary). As you can imagine there is a lot of text in the Microsoft Word document, so any advice on how to transfer it all as quickly as possible would be really helpful. Or, as the case may be, would it be simpler to just use another programme?
    Can you paste/import all - or large chunks of - a Word document in one go? If I format a master page, will Adobe InDesign automatically paste the text into the required number of pages? Or do I have to do it page by page from a clipboard?
    Obviously I am not expecting InDesign to format the text perfectly on each page, I just want to dump it all in one go and then go back and tidy it all up.
    Any help or advice would be greatly appreciated.
    Regards
    Matthew

    File > Place.... and navigate to the Word file. Show Import Options to allow you to preserve the Word formatting, or dump it, or map styles to new styles in your ID document.
    If you have a text frame on your master page, put the loaded cursor over the frame area on your document page and watch for the cursor to change so it is surrounded by parentheses, then hold the Shift key and click. If you don't have a master frame (and most experienced users will tell you that more than 90% of the time a master text frame is more hindrance than help if you set your margins correctly), hold the Shift key and click where you want the top of the first column to start.
    In both cases, ID will add pages as required to place the entire document and thread the text through all the frames. If using master frames, they will be linked and overridden onto the document page automatically; if not, your first frame will start where you click and extend down to the bottom margin, and succeeding frames will fill the column guides (hence the mention of setting your margins correctly).
    ID is a professional program and you really need to spend some time learning how it works. One good place to start is the Help files. Another is Sandee Cohen's Visual QuickStart Guide to InDesign. If you prefer video training, Lynda.com has good content.

  • Multi-level Purchase Order Release

    Hi All,
    Please help me on this.
    I have already tested multi-level release in QAS, but in that scenario there was only 1 Asst Manager (AM) before. So if the AM releases the PO/FO, then next would be the Manager and then the MM Head.
    How to configure in OMGS if the scenario would be the following:
    Scenario 1
    Amount-> 600,00 Php
    Document Type -> Purchase Order
    Releaser would be AM(release code 08) then M(01) then MM Head(07)
    Scenario 2
    Amount-> 600,00 Php
    Document Type -> Framework Order
    Releaser would be AM(release code 09) then M(01) then MM Head(07)
    Thank you.
    Best regards,
    Jesielle

    >
    Jesielle Yanson wrote:
    > Scenario 1
    > Amount-> 600,00 Php
    > Document Type -> Purchase Order
    > Releaser would be AM(release code 08) then M(01) then MM Head(07)
    >
    > Scenario 2
    > Amount-> 600,00 Php
    > Document Type -> Framework Order
    > Releaser would be AM(release code 09) then M(01) then MM Head(07)
    > Jesielle
    hi there,
    if only the PO type makes the difference between your scenarios, then you can use the standard characteristic 'FRG_EKKO_BSART' and define the two given release codes assigned to two release strategies in the same release group in ECC 6.0.
    regards

  • Generating large amounts of XML without running out of memory

    Hi there,
    I need some advice from the experienced XDB users around here. I'm trying to map large amounts of data inside the DB (Oracle 11.2.0.1.0), and by large I mean files up to several GB. I compared the "low level" mapping via PL/SQL in combination with ExtractValue/XMLQuery with the elegant XML view mapping, and the best performance was given by the view mapping using the XMLTABLE XQuery PATH constructs. So now I have a view that lies on several BINARY XMLTYPE columns (where the XML files are stored) for the mapping, and another view which lies above this mapping view and constructs the nested XML result document via XMLELEMENT(), XMLAGG() etc. Example code for better understanding:
    CREATE OR REPLACE VIEW MAPPING AS
    SELECT  type, (...)  FROM XMLTYPE_BINARY,  XMLTABLE ('/ROOT/ITEM' passing xml
         COLUMNS
          type       VARCHAR2(50)          PATH 'for $x in .
                                                                let $one := substring($x/b012,1,1)
                                                                let $two := substring($x/b012,1,2)
                                                                return
                                                                    if ($one eq "A")
                                                                      then "A"
                                                                    else if ($one eq "B" and not($two eq "BJ"))
                                                                      then "AA"
                                                                    else if (...)
    CREATE OR REPLACE VIEW RESULT AS
    select XMLELEMENT("RESULTDOC",
                     (SELECT XMLAGG(
                             XMLELEMENT("ITEM",
                                          XMLFOREST(
                                               type "ITEMTYPE",
    ) as RESULTDOC FROM MAPPING;
    Now all I want to do is materialize this document by inserting it into an XMLTYPE table/column.
    insert into bla select * from RESULT;
    Sounds pretty easy, but I can't get it to work; the DB seems to load a full DOM representation into RAM every time I perform a select, insert into, or use the xmlgen tool. This representation takes more than 1 GB for a 200 MB XML file, and eventually I'm running out of memory with an
    ORA-19202: Error occurred in XML PROCESSING
    ORA-04030: out of process memory
    My question is: how can I get the result document into the table without memory exhaustion? I thought the DB would be smart enough to generate some kind of serialization/data stream to perform this task without loading everything into RAM.
    Best regards

    The file import is performed via JDBC; CLOB and binary storage are possible up to several GB, while the OR storage gives me the ORA-22813 when loading files with more than 100 MB. I use a plain prepared statement:
            File f = new File( path );
            PreparedStatement pstmt = CON.prepareStatement( "insert into " + table + " values ('" + id + "', XMLTYPE(?) )" );
            pstmt.setClob( 1, new FileReader(f), (int) f.length() );
            pstmt.executeUpdate();
            pstmt.close();
    DB version is 11.2.0.1.0 as mentioned in the initial post.
    But this isn't my main problem; the above one is. I prefer using binary XMLType anyway, as it is much easier to index. Does anyone have an idea how to get the large document from the view into an XMLType table?

  • Best way to handle large amount of text

    hello everyone
    My project involves handling a large amount of text (from conferences and reports).
    Most of it is in MS Word. I can turn it into RTF format.
    I don't want to use scrolling. I prefer turning pages (next, previous, last, contents), which means I need to break the text into chunks.
    Currently the process is awkward and slow.
    I know there would be lots of people working on similar projects.
    Could anyone tell me an easy way to handle the text: bring it into the cast and break it up.
    any ideas would be appreciated
    thanks
    ahmed

    Hacking up a document with Lingo will probably lose the RTF formatting information.
    Here's a bit of code to find the physical position of a given line of on-screen text (counting returns is not accurate with word-wrapped lines).
    This strategy uses charPosToLoc to get the actual position for the text member's current width and font size:
    maxHeight = 780 -- arbitrary display height limit
    T = member("sourceText").text
    repeat with i = 1 to T.line.count
      endChar = T.line[1..i].char.count
      lineEndLocV = charPosToLoc(member "sourceText", endChar).locV
      if lineEndLocV > maxHeight then -- found "1 too many" lines
        -- extract the identified lines from "sourceText"
        -- perhaps repeat the parse with the remaining part of "sourceText"
        singlePage = T.line[1..i - 1]
        member("sourceText").text = T.line[i..99999] -- put remaining text back into the source text member
        exit repeat
      end if
    end repeat
    If you want, you can use one of the roundabout ways to display PDF in Director. There might be some batch PDF production tools that can create your pages in a pretty scalable PDF format.
    I think FlashPaper documents can be adapted to Director.

  • Sending large amounts of data spontaneously

    In my normal experience with the internet connection, the amount of data sent is about 50 to 80% of that received, but occasionally Firefox starts transmitting large amounts of data spontaneously; what it is, I don't know, and where it's going to, I don't know. For example, today the AT&T status screen showed about 19 MB received and about 10 MB sent after about an hour of on-line time. A few minutes later, I looked down at the status screen and it showed 19.5 MB received and 133.9 MB sent, and the number was steadily increasing. Just before I went on line today, I ran a complete scan of the computer with McAfee and it reported nothing needing attention. I ran the scan because a similar effusion of sending data spontaneously had happened yesterday. When I noticed the data pouring out today, I closed Firefox and it stopped. When I opened Firefox right afterward, the transmission of data did not recommence. My first thought was that my computer had been captured by the bad guys and now I was a robot, but McAfee says not to worry. But should I worry anyway? What's going on, or, not having a good answer to that, how can I find out what's going on? And how can I make it stop, unless I'm seeing some kind of maintenance operation that Mozilla or Microsoft is subjecting me to?

    Instead of using URLConnection, open a Socket to the server port (80 probably), send a POST HTTP request followed by the data; you may then (optionally) receive data from the server to check that the servlet is OK. This is the same protocol as URLConnection uses, but you have control over when the data is actually sent...
    Socket sock = new Socket(getHost(), 80);
    DataOutputStream dos = new DataOutputStream(sock.getOutputStream());
    dos.writeBytes("POST /servletname HTTP/1.0\r\n");
    dos.writeBytes("Content-type: text/plain\r\n");  // optional, but good if you know
    dos.writeBytes("Content-length: " + lengthOfData + "\r\n");  // again, optional, but good if you can know it without caching the data first
    dos.writeBytes("\r\n");   // gotta have a blank line before the data
      // send data now
    DataInputStream dis = new DataInputStream(sock.getInputStream());  // optional if you want to receive
      // receive any feedback from the servlet if you want
    dis.close();
    dos.close();
    sock.close();
    I'm guessing that URLConnection caches the data so it can fill in "Content-length".
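    On that last point: if you would rather not hand-roll the HTTP request, java.net.HttpURLConnection can be told to stream the request body instead of caching it to compute Content-length, via setFixedLengthStreamingMode (when the size is known up front) or setChunkedStreamingMode. Below is a minimal sketch of that approach; the servlet URL and file arguments are placeholders, not anything from the original post:
    import java.io.*;
    import java.net.*;

    public class StreamingPost {
        public static void main(String[] args) throws IOException {
            File data = new File(args[1]);                                   // file to upload
            HttpURLConnection con =
                (HttpURLConnection) new URL(args[0]).openConnection();       // e.g. http://host/servletname
            con.setRequestMethod("POST");
            con.setDoOutput(true);
            con.setRequestProperty("Content-Type", "text/plain");
            // Stream the body as it is written instead of buffering it in memory;
            // use con.setChunkedStreamingMode(8192) instead if the length is unknown.
            con.setFixedLengthStreamingMode(data.length());
            try (InputStream in = new FileInputStream(data);
                 OutputStream out = con.getOutputStream()) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);                                    // bytes go out as we read them
                }
            }
            System.out.println("Server responded: " + con.getResponseCode());
            con.disconnect();
        }
    }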

  • Pull large amounts of data using odata, client API and so takes a long time in project server 2013

    We are trying to pull large amounts of data in Project Server 2013 using both client API and OData calls, but it seems to take a long time. How is this done?
    In Project Server 2010 we did this by creating SQL views in the reporting database and, for lists, creating a view in the content database. Our IT dept is saying we can't do this anymore. How does a view in the Project database or content database create issues?
    As long as we don't add a field to the table. So how is one to do this by creating a view?

    Hello,
    If you are using Project Server 2013 on premise, I would recommend using T-SQL against the dbo schema in the Project Web Database for your reports; this will be far quicker than the APIs (a rough client-side sketch follows after this reply). You can create custom objects in the dbo schema, see the link below:
    https://msdn.microsoft.com/en-us/library/office/ee767687.aspx#pj15_Architecture_DAL
    It is not supported to query the SharePoint content database directly with T-SQL or add any custom objects to the content database.
    Paul
    Paul Mather | Twitter |
    http://pwmather.wordpress.com | CPS |
    MVP | Downloads
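    To make the comparison concrete, here is a rough client-side sketch of running such a T-SQL query against the Project Web App database over JDBC instead of paging through OData. The host, database name, and the dbo.MSP_EpmProject_UserView view and column names are assumptions about a typical on-premise reporting schema, not something confirmed in this thread:
    import java.sql.*;

    public class ProjectReportPull {
        public static void main(String[] args) throws SQLException {
            // Assumed connection details for an on-premise Project Web App database.
            String url = "jdbc:sqlserver://sqlhost;databaseName=ProjectWebApp;integratedSecurity=true";
            String sql = "SELECT ProjectUID, ProjectName, ProjectStartDate "
                       + "FROM dbo.MSP_EpmProject_UserView";                 // assumed reporting view
            try (Connection con = DriverManager.getConnection(url);
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    // A single set-based query streams the rows back; no OData paging loop.
                    System.out.println(rs.getString("ProjectName") + "  "
                        + rs.getTimestamp("ProjectStartDate"));
                }
            }
        }
    }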

  • General practices - managing large amounts of data

    I have a basic question regarding data management in Java. Right now, I'm working on a program that deals with hundreds of "pages" of content, each with images, HTML, and some state data. I maintain info on each page in a ContentItem object. Similarly, there can be many categories of content, each of which I maintain in a ContentCategory object.
    At this point, I am controlling access to data using manager classes. For example, I have a ContentManager class that contains a global, static list of content and has the methods you'd expect such a class to have (getContent, addContent, deleteContent, etc.). Similarly, I have a CategoryManager class.
    Basically, any time my program has to deal with large amounts of data or objects, I sort of fall back on these manager classes. I'm wondering if this is a reasonable practice or not. If not, perhaps some of the more experienced developers here could recommend a design pattern that fits these situations.

    Thanks for the reply. I do have a sort of ad-hoc database that I'm using. I have a class called ContentReference which holds just the basic state information about a ContentItem. The actual HTML and image data are stored on disk and retrieved as needed. The files for each content item are just serialized copies of ContentItem objects. Each content item has a unique ID value which is passed to the ContentManager's getContent method. The getContent method retrieves the object from disk and returns the ContentItem object. The filenames for all the content items are based on the ID value, so the getContent method doesn't have to search through all the files until it finds the right one.
    In doing this, I don't have to keep all the HTML and image data in memory. Only the ContentReference objects are kept in memory. ContentItems are loaded as needed. It seems a little messy to have two objects that refer to the same thing, but I didn't see any other way of doing it.
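    A rough sketch of the pattern described above: only lightweight references stay in memory, and the full items are deserialized from disk by ID when requested. The class names echo the ones in this thread, but the fields and file-naming scheme are illustrative assumptions:
    import java.io.*;
    import java.util.*;

    // Lightweight in-memory handle: the ID plus whatever state listings need.
    class ContentReference implements Serializable {
        final String id;
        final String title;
        ContentReference(String id, String title) { this.id = id; this.title = title; }
    }

    // Full item, serialized to disk; the heavy HTML/image data lives here, not in memory.
    class ContentItem implements Serializable {
        final String id;
        final String html;
        ContentItem(String id, String html) { this.id = id; this.html = html; }
    }

    class ContentManager {
        private static final Map<String, ContentReference> refs = new HashMap<>();
        private static final File store = new File("content-store");

        static void addContent(ContentItem item, String title) throws IOException {
            store.mkdirs();
            // File name derived from the ID, so getContent never scans the directory.
            try (ObjectOutputStream out = new ObjectOutputStream(
                    new FileOutputStream(new File(store, item.id + ".ser")))) {
                out.writeObject(item);
            }
            refs.put(item.id, new ContentReference(item.id, title));
        }

        static ContentItem getContent(String id) throws IOException, ClassNotFoundException {
            if (!refs.containsKey(id)) return null;
            try (ObjectInputStream in = new ObjectInputStream(
                    new FileInputStream(new File(store, id + ".ser")))) {
                return (ContentItem) in.readObject();                        // loaded only on demand
            }
        }

        static Collection<ContentReference> listContent() { return refs.values(); }

        public static void main(String[] args) throws Exception {
            addContent(new ContentItem("page-1", "<p>hello</p>"), "First page");
            System.out.println(getContent("page-1").html);
        }
    }
    Keeping access behind a manager like this is essentially the repository/DAO pattern, and it seems a reasonable fit here; an embedded database would mainly buy you querying beyond lookup by ID.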

  • FCC for Multi-level Hierarchy

    Hi Friends,
    Can someone please help me out with this. Below is my sender Data Type which needs to be converted:
    DT_TRAC_MESG
      (Hierarchy 1) TRAC_INFO
        (Hierarchy 2) TRAC00
        (Hierarchy 2) GROUP1
          (Hierarchy 3) TRAC05
          (Hierarchy 3) GROUP2
            (Hierarchy 4) TRAC10
            (Hierarchy 4) GROUP3
              (Hierarchy 5) TRAC11
    and my input data is as below :
    XXXXXXXXX  TRAC00 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX (occurance 1)
    XXXXXXXXXXXXXXXXXXTRAC05XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX (occurance 1)
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXTRAC10XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXTRAC11XXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXTRAC11XXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXTRAC11XXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXTRAC10XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXTRAC11XXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXTRAC11XXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXTRAC10XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXTRAC11XXXXXXXXXXXXXXXXXXXXXXX
    I know standard SAP XI doesn't support more than 3 hierarchy levels, but we are using a JMS adapter and it supports more than 3. So please help us in achieving this multi-hierarchy conversion. Please at least provide me the procedure to convert a 3-level hierarchy structure.
    It would be of great help for me if someone can answer this as soon as possible.
    Thanks & Regards,
    Pradeep.

    Hi
    follow this weblog
    File Content Conversion for Multi Hierarchical Structure
    or
    you can use Seeburger to handle the multi-level hierarchy
    SAP PI/XI : Content conversion using Generator mapping functionality of SeeBurger : Part 1.
    SAP PI/XI : Content Conversion using Generator mapping functionality of SeeBurger : Part 2.
    regards
    sandeep
    Edited by: sandeep sharma on Dec 15, 2008 10:45 AM
