Derived table query size limit 65KB?

Hi. In Designer, when I'm in Edit Derived Table and paste a query larger than about 65KB, the end of the query gets cut off, and I then have to split it across multiple derived tables. Has anyone else experienced this? Do you find the limit to be about 65KB?
Thanks,
Mike

Most databases cut off at that limit, so it's not really a Designer limitation.
Frankly, if you have a derived table with 65K of SQL you need to fix your ETL.

Similar Messages

  • IR Query size limit

    Hi all,
It seems that there is a limit on the size of the query that goes into the Region Source. I have a query of 24,476 characters (no spaces) / 33,069 characters (with spaces), and I get a "page cannot be found" error.
I have done whatever I could to reduce the size by making use of views. Is there any workaround?
I see that in classic reports there is a workaround using "PL/SQL function returning a SQL query". I have to use an IR in my case. Any solutions/ideas?
    Environment: Apex 4.1.1 and Oracle 10g
    Thanks in advance
    cmovva

You may be able to move more of your query into the view by using the 'v' function in the WHERE clause of the view instead of bind variables in the IR query. I would not normally suggest this, as there are performance impacts and it ties the view to that page, but it might be worth trying. To reduce the performance impact of the 'v' function, check out this great blog post by Andy Tulley:
    http://atulley.wordpress.com/2011/02/07/using-dual-to-reduce-function-calls-and-getting-your-5-a-day/
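A minimal sketch of the idea (the view, table, and item names are hypothetical):
CREATE OR REPLACE VIEW my_ir_view AS
SELECT t.*
FROM   my_big_table t
WHERE  t.dept_id = v('P1_DEPT_ID');  -- page-item reference moved into the view
-- The IR region source then shrinks to: SELECT * FROM my_ir_view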
    Cheers
    Shunt

  • "Convert Text to Table" Size limit issue?

    Alphabetize a List
I’ve been using this well-known workaround for years.
    Select your list and in the Menu bar click Format>Table>Convert Text to Table
    Select one of the column’s cells (1st click selects entire table, 2nd click selects individual cell)
    Open “Table Inspector” (Click Table icon at top of Pages document)
    Make sure “table” button is selected, not “format” button
    Choose Sort Ascending from the Edit Rows & Columns pop-up menu
    Finally, click Format>Table>Convert Table to Text.
    A few days ago I added items & my list was 999 items long, ~22 pages.
    Tonight, I added 4 more items. Still the same # pages but now 1,003 items long.
    Unable to Convert Text to Table! Tried for 45 minutes. I think there is a list length limit, perhaps 999 items?
I tried closing the document without any changes, re-opening Pages, and re-adding my new items to the end of the list as always; once again, when I highlight the list and choose Format>Table>Convert Text to Table, nothing happens! If I highlight part of the list, up to 999 items, and leave the 4 new items unhighlighted, it works. I pasted the list into a new doc, copied a few items from the middle of the list, and added them to the end of my new 999 list to make it 1,003 items long (but different items), and it did NOT work. I even attempted to add a single new item, making the list an even 1,000 items long, and nope, not working. Even restarted the iMac, no luck.
    I can get it to work with 999 or fewer items easily as always but no way when I add even a single new item.
Anyone else have this problem? It should be easy to test: if you have a list of, say, 100 items, just copy and repeatedly paste it into a new document multiple times to get over 1,000 items, then see if you can select all and convert it from text to table.
    Thanks!
    Pages 08 v 3.03
    OS 10.6.8

    G,
    Yes, Pages has a table size limit, as you have discovered. Numbers has a much greater capacity for table length, so if you do your sort in Numbers you won't have any practical limitation.
    A better approach than switching to Numbers for the sort would be to download, install and activate Devon Wordservice. Then you could sort your list without converting it to a table.
    Jerry

  • Can we create prompts in Derived Table sql query

    Hi,
I am trying to define a derived table SQL query as below. Although the syntax parses, when I generate the report it throws an "invalid prompt definition" error.
    " select * from table_a where call_direction = @Prompt('Enter Call Direction','A','"derivedtable_a".SHORT_STRING',mono,free,not_persistent,) and
    where call_type = @Prompt('Enter Call Direction','A','"derivedtable_b".SHORT_STRING',mono,free,not_persistent,) "
Can somebody please share your thoughts on whether this can be achieved in the universe or not?
    Appreciate immediate responses as it is a show stopper for my deliverable.
    Thanks in advance.
    Thanks and Regards,
    Shireen.

    Hi Shireen
You can use prompts in a derived table; the issue is with the SQL you used while creating it.
Instead of referencing the raw '"derivedtable_a".SHORT_STRING' field, use an object from a class of the universe in the @Prompt's LOV argument. (Note also that your SQL repeats the WHERE keyword; the second "where" should just be "and".)
    For example: If you are creating a derived table in the sample universe efashion on Agg_yr_qt_rn_st_ln_ca_sr table then use the following query:
    SELECT *
    FROM Agg_yr_qt_rn_st_ln_ca_sr
    WHERE Agg_yr_qt_rn_st_ln_ca_sr.Yr = @Prompt ('Enter Value','C','Time Period\Year',mono,constrained)
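Applied to the original query, a corrected sketch might look like this (the Class\Object paths are hypothetical):
SELECT *
FROM table_a
WHERE call_direction = @Prompt('Enter Call Direction','A','Calls\Call Direction',mono,free,not_persistent)
AND call_type = @Prompt('Enter Call Type','A','Calls\Call Type',mono,free,not_persistent)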
    Hope this helps!
    Thanks

• Select column in main query from sub query (derived table)

    Hi:
I have the following query. How do I select Tot_Atnd, which is defined in the derived table?
SELECT count(distinct c.prgm_id) AS "Total Completed", h.Tot_Atnd
FROM iplanrpt.vm_rpt_prgm c
INNER JOIN
(SELECT PRGM_ID, SUM(CASE WHEN ATTENDED_IND = 'Y' THEN 1 ELSE 0 END) AS "Tot_Atnd"
 FROM iPlanrpt.VM_RPT_PRGM_ATND
 GROUP BY PRGM_ID) h
ON c.PRGM_ID = h.PRGM_ID
    Thanks

    Here's an example of what I think the CREATE TABLE and INSERT statements would look like for your data:
CREATE TABLE     vm_rpt_prgm
(     prgm_id     NUMBER
);
INSERT INTO     vm_rpt_prgm
VALUES     (1);
INSERT INTO     vm_rpt_prgm
VALUES     (2);
INSERT INTO     vm_rpt_prgm
VALUES     (3);
INSERT INTO     vm_rpt_prgm
VALUES     (1);
INSERT INTO     vm_rpt_prgm
VALUES     (1);
INSERT INTO     vm_rpt_prgm
VALUES     (3);
CREATE TABLE     vm_rpt_prgm_atnd
(     prgm_id     NUMBER
,     attended_ind     CHAR(1)
);
INSERT INTO     vm_rpt_prgm_atnd
VALUES     (1, 'Y');
INSERT INTO     vm_rpt_prgm_atnd
VALUES     (1, 'N');
INSERT INTO     vm_rpt_prgm_atnd
VALUES     (2, 'Y');
INSERT INTO     vm_rpt_prgm_atnd
VALUES     (2, 'Y');
INSERT INTO     vm_rpt_prgm_atnd
VALUES     (2, 'N');
INSERT INTO     vm_rpt_prgm_atnd
VALUES     (3, 'Y');
INSERT INTO     vm_rpt_prgm_atnd
VALUES     (3, 'N');
INSERT INTO     vm_rpt_prgm_atnd
VALUES     (3, 'N');
INSERT INTO     vm_rpt_prgm_atnd
VALUES     (4, 'Y');   -- this row appears to have been lost from the original post; it is implied by the summary counts below
But, I don't know your data. The sample data should be a simplified case of reality that represents (as best as possible) the real data set. For example, in vm_rpt_prgm, can the same prgm_id show up in multiple records? If not, the sample data I provided above is not representative of your data. Similar questions for vm_rpt_prgm_atnd: Can the same prgm_id show up in multiple records? Are the values of attended_ind 'Y' or 'N'? Can there be prgm_id's that exist in this table, but not the other? If not, again, my sample data is not representative of yours, so adjust it as needed and re-post it.
    Here's some quick summary information on my sample data above:
    There are 3 distinct values of prgm_id in vm_rpt_prgm.
    There are 4 distinct values of prgm_id in vm_rpt_prgm_atnd.
    There are 6 rows/records in vm_rpt_prgm.
    There are 9 rows/records in vm_rpt_prgm_atnd.
    The count of all 'Y's in vm_rpt_prgm_atnd is 5.
    The count of all 'Y's in vm_rpt_prgm_atnd for only the prgm_id's in vm_rpt_prgm is 4.
    So, which of these things (or other things) do you want to see in your result set?
    You might say you want to see:
COUNT_OF_UNIQUE_PRGMS     TOTAL_ATND_YS
3                         4
or...
PRGM_ID     TOTAL_ATND_YS
1           1
2           2
3           1
4           1
Once you clearly specify what you want, I can probably help you.
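For reference, the mechanical fix that lets the original query run at all (a sketch only; whether it gives the numbers you want depends on your answers to the questions above) is to add the derived column to a GROUP BY in the outer query:
SELECT   h.Tot_Atnd, COUNT(DISTINCT c.prgm_id) AS "Total Completed"
FROM     iplanrpt.vm_rpt_prgm c
INNER JOIN
         (SELECT prgm_id,
                 SUM(CASE WHEN attended_ind = 'Y' THEN 1 ELSE 0 END) AS Tot_Atnd
          FROM   iplanrpt.vm_rpt_prgm_atnd
          GROUP BY prgm_id) h
ON       c.prgm_id = h.prgm_id
GROUP BY h.Tot_Atnd;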

  • In perfdatasource querying for global snapshot failed with error 'the size limit for this '

I received SCOM alerts from two Windows 2008 R2 servers hosting Exchange 2010 mailbox roles; the alerts came in at almost the same time from both servers.
Can I ignore those alerts, or can someone give me a clue how to troubleshoot them? Any help would be appreciated.
    In PerfDataSource, querying for Global Snapshot failed with error 'The size limit for this '
From the Operations Manager logs:
    Log Name:      Operations Manager
    Source:        Health Service Modules
    Date:          
    Event ID:      10104
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:       server 1
    Description:
    In PerfDataSource, querying for Global Snapshot failed with error 'The size limit for this ' 
    One or more workflows were affected by this.  
    Workflow name: Microsoft.Windows.Server.2008.OperatingSystem.PercentMemoryUsed.Collection 
    Instance name: Microsoft Windows Server 2008 R2 Enterprise  
    Log Name:      Operations Manager
    Source:        Health Service Modules
    Date:          
    Event ID:      10104
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:       server 1
    Description:
    In PerfDataSource, querying for Global Snapshot failed with error 'The size limit for this ' 
    One or more workflows were affected by this.  
    Workflow name: Microsoft.Windows.Server.2008.LogicalDisk.PercentIdle.Collection 
    Instance name:  " edb file path "
    Log Name:      Operations Manager
    Source:        Health Service Modules
    Date:          
    Event ID:      10104
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:       server 2 
    Description:
    In PerfDataSource, querying for Global Snapshot failed with error 'The size limit for this ' 
    One or more workflows were affected by this.  
    Workflow name: Microsoft.Windows.Server.2008.NetworkAdapter.CurrentBandwidth.Collection 
    Log Name:      Operations Manager
    Source:        Health Service Modules
    Date:          
    Event ID:      10104
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:   server 2   
    Description:
    In PerfDataSource, querying for Global Snapshot failed with error 'The size limit for this ' 
    One or more workflows were affected by this.  
    Workflow name: Microsoft.ForefrontProtection.FPE.Server.PerformanceCollection.RealtimeScanMessageRate

Hi Blake,
Thanks for your reply; I appreciate your help.
I didn't paste the alerts from the SCOM console because they were the same as the events (same source, Health Service Modules) and I didn't want to spam more :-)
Also, the two servers that encountered the issue were mailbox servers in the same DAG. It is worth mentioning that the alerts were resolved by the Exchange 2010 Correlation Engine service:
http://blogs.technet.com/b/kevinholman/archive/2010/10/15/clustering-the-exchange-2010-correlation-engine-service.aspx
http://support.microsoft.com/kb/2592561
Also, the Operations Manager logs are full of warning and error events like 2023, 21402, 21403, 1207!
    Log Name:      Operations Manager
    Source:        HealthService
    Date:          
    Event ID:      2023
    Task Category: Health Service
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:      server 1
    Description:
    The health service has removed some items from the send queue for management group "SCOM" since it exceeded the maximum allowed size of 15 megabytes.
    1- alert from console >>
    In PerfDataSource, querying for Global Snapshot failed with error 'The size limit for this '
    One or more workflows were affected by this.
    Workflow name: Microsoft.Windows.Server.2008.OperatingSystem.PercentMemoryUsed.Collection
    Instance name: Microsoft Windows Server 2008 R2 Enterprise 
    EventSourceName: Health Service Modules

• Adding derived table to Universe issue!

    Hi,
I am trying to add a derived table to my universe by writing a SQL query, but after a certain length it stops letting me write more. Could somebody tell me whether there is a limit on how long the query can be, or whether there is any way to increase the allowed length? Thanks in advance.

How big is your query? I think there is a limit of about 32K imposed by the edit box, but that is all.
It is possible to create derived tables based on other derived tables using the @DerivedTable function, so you might be able to break your query into smaller parts and build it up that way (see the sketch below).
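A minimal sketch of the idea (the derived table names and SQL are hypothetical): put the first part of the long SQL in one derived table, DT_BASE, then define a second derived table that builds on it:
SELECT b.cust_id, SUM(b.amount) AS total_amount
FROM   @DerivedTable(DT_BASE) b
GROUP BY b.cust_id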
    Regards
    Alan

  • SQL Server 2008 XML Datatype variable size limit

Can you please let me know the size limit for an XML data type variable in SQL Server 2008?
I have read somewhere that the XML data type holds up to 2GB, but that does not seem to be the case.
We define a variable of XML data type and assign it the output of a SELECT ... FOR XML AUTO statement within a CTE, assigning the output of the CTE to the XML variable.
When we limit the rows to 64, which gives a length of 43,370 (measured with CAST(@XMLvariable AS varchar(max))), the variable returns the XML. However, if I increase the rows from 64 to 65, which gives a length of 44,048, the variable comes back blank.
Is there any LENGTH limit on the XML data type?
Thanks in advance!!

    Hello,
See MSDN, xml (Transact-SQL): the size limit is 2 GB, and it works. If your XML data is being truncated, it is because something in your code is wrong; but without knowing the table design (DDL) and your query it's difficult to give you further assistance, so please provide more details.
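For reference, a minimal way to check that a large result stays intact (the table and column names are hypothetical) is to keep the FOR XML subquery typed as xml with the TYPE directive, rather than letting the result be converted to a string on the way into the variable:
DECLARE @x xml;
SET @x = (SELECT TOP (65) col1, col2
          FROM dbo.SomeTable
          FOR XML AUTO, TYPE);  -- TYPE keeps the result as xml (up to 2 GB)
SELECT LEN(CAST(@x AS nvarchar(max))) AS xml_len;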
    Olaf Helper
    [ Blog] [ Xing] [ MVP]

  • How do I use Derived Table to dynamically choose fact table

    How do I use the Derived Table functionality to dynamically choose a fact table?
    I am using BO XI R2 querying against Genesys Datamart kept in Oracle 10g.  The datamart contains aggregated fact tables at different levels (no_agg, hour, day, week, etc...) I would like to build my universe so that if the end user chooses a parameter to view reports at daily granularity, then the daily fact table is used;  choose hourly granularity, then hourly fact table is used, etc....
I tried using dynamic SQL in Oracle syntax, but the BusinessObjects universe didn't like that type of coding.
    The tables look something like this:
    O_LOB1_NO_AGG o
    inner join V_LOB1_NO_AGG v on o.object_id = v.object_id
    inner join T_LOB1_NO_AGG t on v.timekey = t.timekey
Likewise, in the 'hour', 'day', 'week', etc. fact tables, the primary key to foreign key names and relationships are the same, and the columns in each O_, V_, T_ fact table are the same or very similar (just aggregated at different levels of time).
I was thinking of going a different route and using aggregate awareness, but there are many lines of business (20+) and multiple time dimensions (7). I believe aggregate awareness would require me to place all relevant tables in the universe as separate objects, which would create a large universe with many table objects and would not be maintenance-friendly. I was also going to choose the line of business (LOB) dynamically in the derived tables, based on the end user choosing an LOB parameter, but that is out of scope for my current question; it just points you down the train of thought I am travelling. Thanks for any help you can provide!

    You can create a derived table containing a union like the following:
(select a,b,c from DailyFacts where @prompt('View'....) = 'Daily' and (<rest of your where conditions here if necessary>))
union
(select a,b,c from MonthlyFacts where @prompt('View'....) = 'Monthly' and (<rest of your where conditions here if necessary>))
union
(select a,b,c from YearlyFacts where @prompt('View'....) = 'Yearly' and (<rest of your where conditions here if necessary>))
    I assume that you are familiar with the @prompt syntax
    Regards,
    Stratos

  • Problem with WebIntelligence and Universe Designer Derived Table

Hi people, I have an issue with a report I want to build in WebIntelligence. Here it goes:
I created a derived table that returns every material, whether or not it has any movement. The problem is that when I build the report using other information, such as Material Name, the report filters on the coincidence between the materials in the derived table and the SAP standard table. I tried to modify the generated SQL, but Oracle does not allow it.
    So here are my questions:
1) Is there any way to do a left outer join, so that every single material is returned and WebIntelligence is not allowed to generate inline views?
2) Do I have to modify the derived table and use the standard tables?
3) Can I work with a derived table that does not have any join to the standard tables?
    Thanks in advance,
    Reynaldo

If I understand you correctly, it sounds like you are getting an inner join where you want an outer join. You have several options:
1. You can do an outer join in the universe, or even embed it in your derived table, if that is what you are trying to do (see the sketch after this list).
    2. You can have a derived table that is not joined with any other tables in the Universe. But you will have to merge the dimensions in the Webi report, and then be sure to put the correct dimension(s) on the report in order to reflect the outer join you want.
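A minimal sketch of option 1 inside a derived table (the table and column names are hypothetical):
SELECT m.material_id, m.material_name, mv.qty
FROM   materials m
LEFT OUTER JOIN material_movements mv
       ON mv.material_id = m.material_id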
    I hope that helps.

  • FILE and FTP Adapter file size limit

    Hi,
    Oracle SOA Suite ESB related:
I see that there is a file size limit of 7MB for transfers using the File and FTP adapters, and that debatching can be used to overcome this. I also see that debatching can be done only for structured files.
    1) What can be done to transfer unstructured files larger than 7MB from one server to the other using FTP adapter?
    2) For structured files, could someone help me in debatching a file with the following structure.
    000|SEC-US-MF|1234|POPOC|679
    100|PO_226312|1234|7130667
    200|PO_226312|1234|Line_id_1
    300|Line_id_1|1234|Location_ID_1
    400|Location_ID_1|1234|Dist_ID_1
    100|PO_226355|1234|7136890
    200|PO_226355|1234|Line_id_2
    300|Line_id_2|1234|Location_ID_2
    400|Location_ID_2|1234|Dist_ID_2
    100|PO_226355|1234|7136890
    200|PO_226355|1234|Line_id_N
    300|Line_id_N|1234|Location_ID_N
    400|Location_ID_N|1234|Dist_ID_N
    999|SSS|1234|88|158
I would need the complete data in a single file at the destination for each file in the source. If there end up being as many files at the destination as there are batches, I would need each output file to have the following structure:
    000|SEC-US-MF|1234|POPOC|679
    100|PO_226312|1234|7130667
    200|PO_226312|1234|Line_id_1
    300|Line_id_1|1234|Location_ID_1
    400|Location_ID_1|1234|Dist_ID_1
    999|SSS|1234|88|158
    Thanks in advance,
    RV

    Ok Here are the steps
    1. Create an inbound file adapter as you normally would. The schema is opaque, set the polling as required.
    2. Create an outbound file adapter as you normally would, it doesn't really matter what xsd you use as you will modify the wsdl manually.
    3. Create a xsd that will read your file. This would typically be the xsd you would use for the inbound adapter. I call this address-csv.xsd.
    4. Create a xsd that is the desired output. This would typically be the xsd you would use for the outbound adapter. I have called this address-fixed-length.xsd. So I want to map csv to fixed length format.
    5. Create the xslt that will map between the 2 xsd. Do this in JDev, select the BPEL project, right-click -> New -> General -> XSL Map
6. Edit the outbound file partner link wsdl, setting the jca operations as the doc specifies; this is my example.
    <jca:binding  />
            <operation name="MoveWithXlate">
          <jca:operation
              InteractionSpec="oracle.tip.adapter.file.outbound.FileIoInteractionSpec"
              SourcePhysicalDirectory="foo1"
              SourceFileName="bar1"
              TargetPhysicalDirectory="C:\JDevOOW\jdev\FileIoOperationApps\MoveHugeFileWithXlate\out"
              TargetFileName="purchase_fixed.txt"
              SourceSchema="address-csv.xsd" 
              SourceSchemaRoot ="Root-Element"
              SourceType="native"
              TargetSchema="address-fixedLength.xsd" 
              TargetSchemaRoot ="Root-Element"
              TargetType="native"
              Xsl="addr1Toaddr2.xsl"
              Type="MOVE">
        </jca:operation>
7. Edit the outbound header to look as follows
        <types>
            <schema attributeFormDefault="qualified" elementFormDefault="qualified"
                    targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/file/"
                    xmlns="http://www.w3.org/2001/XMLSchema"
                    xmlns:FILEAPP="http://xmlns.oracle.com/pcbpel/adapter/file/">
                <element name="OutboundFileHeaderType">
                    <complexType>
                        <sequence>
                            <element name="fileName" type="string"/>
                            <element name="sourceDirectory" type="string"/>
                            <element name="sourceFileName" type="string"/>
                            <element name="targetDirectory" type="string"/>
                            <element name="targetFileName" type="string"/>                       
                        </sequence>
                    </complexType>
                </element> 
            </schema>
    </types>
8. The last trick is to have an assign between the inbound header and the outbound header partner link that copies the headers. You only need to copy the sourceDirectory and sourceFileName.
        <assign name="Assign_Headers">
          <copy>
            <from variable="inboundHeader" part="inboundHeader"
                  query="/ns2:InboundFileHeaderType/ns2:fileName"/>
            <to variable="outboundHeader" part="outboundHeader"
                query="/ns2:OutboundFileHeaderType/ns2:sourceFileName"/>
          </copy>
          <copy>
            <from variable="inboundHeader" part="inboundHeader"
                  query="/ns2:InboundFileHeaderType/ns2:directory"/>
            <to variable="outboundHeader" part="outboundHeader"
                query="/ns2:OutboundFileHeaderType/ns2:sourceDirectory"/>
          </copy>
    </assign>
You should be good to go. If you just want pass-through, then you don't need the native format: set it to opaque, with no XSLT.
    cheers
    James

  • S1000 Data file size limit is reached in statement

I am new to Java and was given the task of troubleshooting a Java application that was written a few years ago and is no longer supported. The application creates database files in the user's directory: diwdb.properties, diwdb.data, diwdb.lproperties, diwdb.script. The purpose of the application is to open a zip file and insert the files into a table in the database.
    The values that are populated in the diwdb.properties file are as follows:
    #HSQL Database Engine
    #Wed Jan 30 08:55:05 GMT 2013
    hsqldb.script_format=0
    runtime.gc_interval=0
    sql.enforce_strict_size=false
    hsqldb.cache_size_scale=8
    readonly=false
    hsqldb.nio_data_file=true
    hsqldb.cache_scale=14
    version=1.8.0
    hsqldb.default_table_type=memory
    hsqldb.cache_file_scale=1
    hsqldb.log_size=200
    modified=yes
    hsqldb.cache_version=1.7.0
    hsqldb.original_version=1.8.0
    hsqldb.compatible_version=1.8.0
Once the database file gets to 2GB it brings up the error message 'S1000 Data file size limit is reached in statement (Insert into <tablename>......'.
From searching on the internet it appears that the parameter hsqldb.cache_file_scale needs to be increased, and 8 was a suggested value.
I have the distribution files (.jar & .jnlp) that are used to run the application, and a source directory that was found that contains java files. But I do not see any properties files in which to set parameters. I was able to load both directories into NetBeans but really don't know if the files can be rebuilt for distribution, as I'm not clear on what I'm doing and NetBeans shows errors in some of the directories.
I have also tried adding parameters to the startup url: http://uknt117.uk.infores.com/DIW/DIW.jnlp?hsqldb.large_data=true?hsqldb.cache_file_scale=8 but that does not affect the application.
    I have been struggling with this for quite some time. Would greatly appreciate any assistance to help resolve this.
    Thanks!

Thanks! But where would I run the SQL statement? When anyone launches the application it creates the database files in their user directory. How would I connect to the database after that to execute the statement?
I see the CREATE TABLE statements in the files I have pulled into NetBeans, in both the source folder and the distribution folder. Could I add the statement there, before the table is created, in the jar file in the distribution folder, and then re-compile it for distribution? Or would I need to add it to the file in the source directory and recompile those to create a new distribution?
    Thanks!
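For reference, one way to reach such a file-based database without rebuilding the application is to connect to it directly with a JDBC SQL tool (for example HSQLDB's bundled DatabaseManager), using a URL such as jdbc:hsqldb:file:<user dir>/diwdb while the application is closed. A sketch of the statements, assuming HSQLDB 1.8 syntax (note: HSQLDB may only honor a change of cache_file_scale while the CACHED tables are empty, so the data may need to be rebuilt afterwards):
-- raise the 2GB data file limit (HSQLDB 1.8 syntax):
SET PROPERTY "hsqldb.cache_file_scale" 8;
SHUTDOWN;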

  • Is there a size limit on internal HD? iBook G4 1Ghz

    Recently bought a used 12" iBook G4. Of course no installation discs came with it.
    When received it had a 30 Gb hard drive, 528 mb RAM, and 10.3.9.
    I wanted to max out RAM, increase HD size, and move to 10.4.x.
    (Additional background - I have various other Macs and can use Fire Wire Target Mode from any of them to the "new" iBook.)
    RAM upgrade successful. I borrowed install discs from a friend with a similar though not identical machine; his is a 1.33 Ghz. When I removed the old HD and replaced with a new 160 Gb drive, the Install discs did not see the HD. So I took the iBook apart again, put the new drive in an OWC external enclosure connected to my Powerbook G4. Saw that the drive needed formatting, duh, I did so. Put it back in the iBook. Started up from the install disc, it still did not see the HD. Started the iBook in Fire Wire Target mode; could not see the newly installed HD from the other machine either; all I could see on the iBook end of the Firewire was the CD/DVD. I have looked through various threads here related both to iBook HD replacement and to system upgrade concerns - nothing I find matches my issue.
    I have since reinstalled the original 30 Gb HD, have used the borrowed installation discs to do a clean install of 10.3.x (so the similar but not identical was close enough) and a retail 10.4 Install to upgrade the system. So, 2 out of 3; I have my expanded memory, a clean upgraded system. But I hate to give up on a bigger internal HD!!!
    1. Is there a size limit on the internal HD on the iBook? Is 160 Gb too large, and should I try again with a less aggressive increase? I do have a 60 Gb I could pull out of a PB G4 I will probably sell...
2. If size is not the problem, it seems that maybe it would help if I somehow pre-install the system onto the 160 Gb before I physically put it into the iBook. But I do not understand the whole process of making bootable backups etc. Is there a way I can copy my current installation, or do a reinstallation of 10.3.x or 10.4.x on the 160 Gb while it is external, connected via Firewire, and then swap the 160 Gb into the iBook and have it just boot up? I see hard drives for sale on eBay "with OS X 10.x.x already installed..." I tried, both from my Powerbook and from the iBook, to do an install onto the 160 Gb while it was attached via Firewire. The PB didn't want to have anything to do with those iBook install discs, and the iBook wouldn't allow me the option of installing onto the external drive. And I assume that a simple Select-All-and-Copy isn't going to do it for me. So how do I install onto the drive while it is external, and will it work once I move it into the iBook?
    3. Probably not important, but out of curiosity... if I could use my Powerbook G4 Installation discs to burn a new system onto my 160 Gb drive (when it was connected as an external to the PB), and then put that drive into the iBook, would that work? Or would iBook vs. Powerbook differences throw off the installation?
    Thanks!
    Stan

    Shiftless:
    Is there a size limit on the internal HD on the iBook?
    The only limitation of HDD size is due to availability. The largest capacity ATA/IDE HDD available is 250 GB. Theoretically there is no limit to the capacity your computer will support, if larger capacity HDDs were available.
    If size is not the problem, it seems that maybe it would help if I somehow pre-install the system onto the 160Gb before I physically put it into the iBook.
    This is not necessary, although you could do it this way. I note in your later responses that you have already attempted this process and have some difficulties. Here is the procedure I would recommend, after the HDD is installed.
    • Clone the old internal HDD to an external firewire HDD. I gather you have done this.
    • Format and erase the newly installed HDD (directions follow).
    • Install new OS from disk or clone back from external HDD. (Post back for directions if you choose to install from this and then restore from backup).
Formatting, Partitioning, and Erasing a Hard Disk Drive
    Warning! This procedure will destroy all data on your Hard Disk Drive. Be sure you have an up-to-date, tested backup of at least your Users folder and any third party applications you do not want to re-install before attempting this procedure.
    • With computer shut down insert install disk in optical drive.
    • Hit Power button and immediately after chime hold down the "C" key.
    • Select language
    • Go to the Utilities menu (Tiger) Installer menu (Panther & earlier) and launch Disk Utility.
    • Select your HDD (manufacturer ID) in left side bar.
    • Select Partition tab in main panel. (You are about to create a single partition volume.)
    • Click on Options button
    • Select Apple Partition Map (PPC Macs) or GUID Partition Table (Intel Macs)
    • Click OK
    • Select number of partition in pull-down menu above Volume diagram.
    (Note 1: One partition is normally preferable for an internal HDD.)
    • Type in name in Name field (usually Macintosh HD)
    • Select Volume Format as Mac OS Extended (Journaled)
    • Click Partition button at bottom of panel.
    • Select Erase tab
    • Select the sub-volume (indented) under Manufacturer ID (usually Macintosh HD).
    • Check to be sure your Volume Name and Volume Format are correct.
    • Click Erase button
    • Quit Disk Utility.
    cornelius

• Derived table in universes

    Hello,
Please answer me:
What is a derived table? Where are derived tables used? And what are the advantages and disadvantages of derived tables?

    Hi,
Derived tables are nothing else but inline views (with the one additional benefit of being able to use @prompt syntax in a derived table) and as such do not contain any data; everything is calculated on the fly during query execution, meaning whenever you refresh a report.
    Derived tables are tables that you define in the universe schema. You create objects on them as you do with any other table. A derived table is defined by an SQL query at the universe level that can be used as a logical table in Designer.
    Derived tables have the following advantages:
• Reduced amount of data returned to the document for analysis. You can include complex calculations and functions in a derived table. These operations are performed before the result set is returned to a document, which saves time and reduces the need for complex analysis of large amounts of data at the report level.
• Reduced maintenance of database summary tables. Derived tables can, in some cases, replace statistical tables that hold results for complex calculations that are incorporated into the universe using aggregate awareness. These aggregate tables are costly to maintain and refresh frequently. Derived tables can return the same data and provide real-time data analysis.
    Derived tables are similar to database views, with the advantage that the SQL for a derived table can include BusinessObjects prompts.
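For example, a derived table named CUSTOMER_TOTALS might be defined in Designer by SQL like the following (a hypothetical sketch; the table and column names are invented), after which objects are built on cust_id and total_amount just as with a physical table:
SELECT cust_id,
       SUM(amount) AS total_amount
FROM   orders
GROUP BY cust_id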
    Thanks,
    Amit

  • IDT 4.1 - Derived Table Question

    Hello,
I am working on a project and I have hit a roadblock; I need your help...
Scenario: We have a derived table with multiple @prompts, and I need to make them optional, so that if the user does not answer any of the prompts the query still brings back results. I already tried using 'optional' in the @prompt syntax, as below; this makes the prompts NOT mandatory, but when the user does not answer them we DON'T get any results...
    @Prompt('Parameter Name','A',LOV,Multi,Constrained,Not_Persistent,,optional)
    Any ideas and thoughts are welcome !!
    Warm Regards,
    Manohar Singh

This is how I would go about debugging the issue, then.
    I would temporarily pull the result of the prompt into a column of the derived table (we'll call it derived table A), and then add a dimension which displays the value of that column. 
    Then I would add some sort of outer join (which generates a Cartesian product perhaps -- doesn't matter for debugging, though you may want to choose a small table!) between derived table A and another table (we'll call it 'table B').  Then create another dimension on 'table B'.
    Then I would run a query using the dimension from derived table A and the dimension from table B.  The outer join should always give a result and allow you to see what BO is actually substituting for the empty prompt value. 
    If you transform the empty prompt value somehow in one of your subqueries, you could take the transformed value and turn it into another dimension on derived table A.  Take it one step at a time this way and you'll eventually understand how the pieces are being put together.
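A sketch of that first step (using mono/free here purely so the prompt value can be selected as a single column; the table name is hypothetical):
SELECT t.*,
       @Prompt('Parameter Name','A',LOV,mono,free,Not_Persistent,,optional) AS prompt_value
FROM   some_table t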
    Good luck!
