Data Federator XI 3.0 using DB2 VARCHAR FOR BIT DATA Column?

We have a column in a DB2 database that is defined as VARCHAR(16) FOR 
BIT DATA.
We are using the suggested IBM JDBC driver, db2jcc.jar, against a DB2 for 
OS/390 version 8.1.5 database.
The Datasource column displays a data type of NULL, indicating that Data 
Federator does not understand or cannot handle this IBM data type.
We have two issues.
First, target tables are unable to return any rows, regardless of whether 
we exclude the columns typed as NULL mentioned above. We see the 
'Wait' animation for a very long time when we use the 'Target table 
test tool' option. Selecting the option to display only the count returns zero.
We are able to fetch and view non-NULL column data when using the 
'Query tool' under the Datasource pane.
I also get the same result when using the 'My Query Tool' in Server 
Administrator: a selection against the sources returns data, while 
selecting from a target table returns no data. Also, a 'select 
count(*)' returns zero.
The second issue is in mapping a relationship between two DB2 tables 
where the join is between two columns of the above mentioned type 
(NULL).
The error we get back when we use "Show Errors" is "The types 
'NULL' (in 'S1.PLANNEDGOALID') and 'NULL' (in 'S2.PLANNEDGOALID') are 
not compatible.". When reviewing the relationship, a dashed red line 
appears instead of a solid grey line between the two tables in the 
"Table relationships and pre-filters" section of our mapping pane.
The following query returns an error via the Server Administrator 
Query Tool: "Types 'NULL' and 'NULL' are not compatible for operator 
'=' (Error code : 10248)".
select count(*)
from (
  select s1.CASEID, s2.PLANNEDGOALID, s2.NAME, s2.PLANNEDGRPSTTYCD
  from "/DF_CMS_ODS/sources/CMFSREPT/CMSPROD.PLANNEDGOAL" AS s1,
       "/DF_CMS_ODS/sources/CMFSREPT/CMSPROD.PLANNEDGOAL" AS s2
  where s1.PLANNEDGOALID = s2.PLANNEDGOALID
)
Here are the property settings in the Resource Connector Settings for 
the jdbc.db2.zSeries resource we are using.
capabilities: isjdbc=true;orderBy=false
driverLocation: drivers/db2jcc_license_cisuz.jar;drivers/db2jcc.jar
jdbcClass: com.ibm.db2.jcc.DB2Driver
sourceType: db2
supportsCatalog: no
urlTemplate: jdbc:db2://<hostname>[:<port>]/<databasename>
Here are the Connection parameters as defined for the datasource in DF 
Designer.
Defined resource: jdbc.db2.zSeries
Jdbc connection URL: jdbc:db2://DB2D03:50000/CMFSREPT
Authentication: Use a specific database logon for all Data Federator 
users.
User Name: x
Password: hidden
Login domain: -- Choose a defined login domain --
Supports Schema: checked
Schema: is empty
Prefix table names with schema name: checked
Supports catalog: unchecked
Prefix table names with the database name: unchecked
Table types: TABLE and VIEW
So, these are the two questions we need answered:
Is this a limitation of Data Federator?
Is there a workaround short of changing the data type in the database?

Hi Darren,
VARCHAR() FOR BIT DATA is a binary data type, and Data Federator does not support binary types. But if, in your case, it makes sense to map this column to a VARCHAR data type, you can configure the DB2 connector to expose it as a VARCHAR.
Your column can be mapped explicitly to a data type of your choice using the castColumnType property.
This property can be set by updating the resource you selected when you registered your DB2 data source.
If the resource is "jdbc.db2", then:
1. Launch Data Federator Administrator
2. Click on "Administration" tab
3. Click on "Connector Settings"
4. Select the right resource: "jdbc.db2"
5. Click "Add a property"
6. Select "castColumnType"
7. Set its value to: VARCHAR() FOR BIT DATA=VARCHAR
8. Click on Ok
You should see this column as a VARCHAR.
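As a quick check, here is a minimal sketch (connection details taken from the datasource definition above; the password is a placeholder) of how to ask the JCC driver what it reports for that column. Data Federator derives its types from this JDBC metadata, and a FOR BIT DATA column should come back as a binary JDBC type, which is what it cannot map by default:

    import java.sql.*;

    public class CheckColumnType {
        public static void main(String[] args) throws Exception {
            Class.forName("com.ibm.db2.jcc.DB2Driver");
            Connection con = DriverManager.getConnection(
                    "jdbc:db2://DB2D03:50000/CMFSREPT", "x", "password");
            DatabaseMetaData md = con.getMetaData();
            // Schema, table and column names taken from the query in the post.
            ResultSet rs = md.getColumns(null, "CMSPROD", "PLANNEDGOAL", "PLANNEDGOALID");
            while (rs.next()) {
                // A binary type code here (e.g. java.sql.Types.VARBINARY = -3)
                // is what Data Federator displays as NULL.
                System.out.println(rs.getString("TYPE_NAME")
                        + ", java.sql.Types code: " + rs.getInt("DATA_TYPE"));
            }
            rs.close();
            con.close();
        }
    }

After the castColumnType property is in place, the same metadata should report a character type, and the join between S1.PLANNEDGOALID and S2.PLANNEDGOALID should validate as VARCHAR = VARCHAR.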
Regards,
Mokrane
PS: For the target table issue, we have forwarded your mail to the Data Federator Designer team.

Similar Messages

  • Using Alias for the table column

    Dear all,
       Can we use an alias name for a table column while developing a Z report? That is, without using the standard table column name (like mara-matnr), I wish to use mat_num for matnr, so that one can easily understand the column.
    Could you help me out in this regard.
    Thanks in Advance,
    S.Sridhar.

    Yes, you can declare it in your internal table like:
        material_number LIKE mara-matnr.

  • Problem creating dataset using db2 stored procedure in Eclipse BIRT

    Hi,
    I am using DB2 9.7 Express Edition with Eclipse BIRT (version 2.5.1) for generating reports. I have used the Type 4 driver for the JDBC connection.
    For that, I have established the JDBC connection using the db2jcc.jar and db2jcc_license_cu.jar files.
    I have successfully created a data source, say DB2BIRT, with the following settings:
    Driver Class - com.ibm.db2.jcc.DB2Driver ( v3.50)
    Driver URL - jdbc:db2://localhost:50000/database_name
    User name - user_name
    Password - Password
    I have written some stored procedures and am trying to use the result sets from those stored procedures in my report.
    Stored procedures that return only a single result set work absolutely fine for a new dataset using the above DB2BIRT.
    But I am unable to create a new dataset using stored procedures that return multiple result sets.
    I am getting the following error:
    org.eclipse.birt.data.engine.odaconsumer.PreparedStatement$SequentialResultSetHandler getMoreResults
    SEVERE: Cannot get more result sets from the statement.
    Cannot get the result set.
    SQL error #1: [jcc][10120][10943][3.50.152] Invalid operation: statement is closed. ERRORCODE=-4470, SQLSTATE=null
    org.eclipse.birt.report.data.oda.jdbc.JDBCException: Cannot get the result set.
    SQL error #1: [jcc][10120][10943][3.50.152] Invalid operation: statement is closed. ERRORCODE=-4470, SQLSTATE=null
    com.ibm.db2.jcc.b.SqlException: [jcc][10120][10943][3.50.152] Invalid operation: statement is closed. ERRORCODE=-4470, SQLSTATE=null
    at org.eclipse.birt.report.data.oda.jdbc.CallStatement.getMoreResults(CallStatement.java:1760)
    at org.eclipse.datatools.connectivity.oda.consumer.helper.OdaAdvancedQuery.getMoreResults(OdaAdvancedQuery.java:214)
    at org.eclipse.birt.data.engine.odaconsumer.PreparedStatement$SequentialResultSetHandler.getMoreResults(PreparedStatement.java:5183)
    at org.eclipse.birt.data.engine.odaconsumer.PreparedStatement.getMoreResults(PreparedStatement.java:792)
    at org.eclipse.birt.data.engine.odaconsumer.PreparedStatement.flushResultSets(PreparedStatement.java:1009)
    at org.eclipse.birt.data.engine.odaconsumer.PreparedStatement.close(PreparedStatement.java:980)
    at org.eclipse.birt.data.engine.executor.DataSource$DataSourceReleaser.run(DataSource.java:374)
    at java.lang.Thread.run(Thread.java:619)
    Caused by: com.ibm.db2.jcc.b.SqlException: [jcc][10120][10943][3.50.152] Invalid operation: statement is closed. ERRORCODE=-4470, SQLSTATE=null
    at com.ibm.db2.jcc.b.wc.a(wc.java:55)
    at com.ibm.db2.jcc.b.wc.a(wc.java:102)
    at com.ibm.db2.jcc.b.tk.db(tk.java:3118)
    at com.ibm.db2.jcc.b.tk.a(tk.java:1063)
    at com.ibm.db2.jcc.b.tk.getMoreResults(tk.java:908)
    at org.eclipse.birt.report.data.oda.jdbc.CallStatement.getMoreResults(CallStatement.java:1756)
    ... 7 more
    Moreover, I tried to resolve the above issue by changing the driver class from com.ibm.db2.jcc.DB2Driver (v3.50) to com.ibm.db2.jcc.uw.DB2StoredProcDriver (v3.50).
    But then, while doing "Test Connection" for a new data source using this new driver class for stored procedures, I get the following error:
    org.eclipse.birt.report.data.oda.jdbc.JDBCException: The selected driver cannot parse the given url.
    at org.eclipse.birt.report.data.oda.jdbc.JDBCDriverManager.testConnection(JDBCDriverManager.java:627)
    at org.eclipse.birt.report.data.oda.jdbc.ui.util.DriverLoader.testConnection(DriverLoader.java:120)
    at org.eclipse.birt.report.data.oda.jdbc.ui.util.DriverLoader.testConnection(DriverLoader.java:133)
    at org.eclipse.birt.report.data.oda.jdbc.ui.profile.JDBCSelectionPageHelper.testConnection(JDBCSelectionPageHelper.java:653)
    at org.eclipse.birt.report.data.oda.jdbc.ui.profile.JDBCSelectionPageHelper.access$7(JDBCSelectionPageHelper.java:627)
    at org.eclipse.birt.report.data.oda.jdbc.ui.profile.JDBCSelectionPageHelper$7.widgetSelected(JDBCSelectionPageHelper.java:549)
    at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:228)
    at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)
    at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1176)
    at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:3493)
    at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3112)
    at org.eclipse.jface.window.Window.runEventLoop(Window.java:825)
    at org.eclipse.jface.window.Window.open(Window.java:801)
    at org.eclipse.birt.report.designer.ui.dialogs.BaseDialog.open(BaseDialog.java:110)
    at org.eclipse.birt.report.designer.data.ui.actions.EditDataSourceAction.doAction(EditDataSourceAction.java:68)
    at org.eclipse.birt.report.designer.internal.ui.views.actions.AbstractElementAction.run(AbstractElementAction.java:70)
    at org.eclipse.jface.action.Action.runWithEvent(Action.java:498)
    at org.eclipse.jface.action.ActionContributionItem.handleWidgetSelection(ActionContributionItem.java:584)
    at org.eclipse.jface.action.ActionContributionItem.access$2(ActionContributionItem.java:501)
    at org.eclipse.jface.action.ActionContributionItem$5.handleEvent(ActionContributionItem.java:411)
    at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)
    at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1176)
    at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:3493)
    at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3112)
    at org.eclipse.ui.internal.Workbench.runEventLoop(Workbench.java:2405)
    at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2369)
    at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2221)
    at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:500)
    at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
    at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:493)
    at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149)
    at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:113)
    at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:194)
    at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
    at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
    at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:368)
    at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559)
    at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514)
    at org.eclipse.equinox.launcher.Main.run(Main.java:1311)
    Can anybody address this issue so that I can successfully implement a stored procedure (returning multiple result sets) for creating a dataset in Eclipse BIRT?
    I will be really thankful.
    Thanks in advance,
    Manasi

    Well, in my stored procedure I have used 2 to 3 cursors (as per my business logic), and all cursors except one are holding result sets. That exceptional cursor is intended for holding as well as returning a result set after the call to the procedure. It runs perfectly on DB2 and returns the desired output. The problem is with Eclipse: the same procedure does not work in Eclipse BIRT.
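    For reference, a minimal sketch of the standard JDBC pattern for draining every result from a procedure call (connection details from the post; the procedure name is illustrative, not from the thread). BIRT's odaconsumer layer performs the equivalent of this getMoreResults loop, which is the call that fails in the trace above:

        import java.sql.*;

        public class MultiResultSetDemo {
            public static void main(String[] args) throws Exception {
                Connection con = DriverManager.getConnection(
                        "jdbc:db2://localhost:50000/database_name", "user_name", "password");
                CallableStatement cs = con.prepareCall("CALL MY_SCHEMA.MY_PROC()");
                boolean isResultSet = cs.execute();
                while (true) {
                    if (isResultSet) {
                        ResultSet rs = cs.getResultSet();
                        while (rs.next()) {
                            System.out.println(rs.getString(1)); // consume each row
                        }
                        rs.close();
                    } else if (cs.getUpdateCount() == -1) {
                        break; // neither a result set nor an update count: all drained
                    }
                    isResultSet = cs.getMoreResults(); // advance to the next result
                }
                cs.close();
                con.close();
            }
        }

    If this loop runs cleanly outside BIRT against the same procedure, the multiple result sets themselves are fine, and the problem lies in how the BIRT ODA layer advances through them.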

  • Redirected restore - automatic storage - using DB2 api not br tools

    I've done 100+ redirected restores for non-automatic storage databases using TSM, using only the DB2 APIs and not the BR*Tools.
    Now I'm challenged with a redirected restore of an automatic storage DB.
    SQL1277N  Restore has detected that one or more table space containers are inaccessible, or has set their state to 'storage must be defined'.
    DB20000I  The RESTORE DATABASE command completed successfully.
    The backup image gets read and the TAGs for the containers are built; I can see that by doing a "df" at the filesystem level. But as soon as the restore attempts to write data into the containers, I get the above error message.
    Here are my steps: 
    1. offline backup using db2 api on source
    2. create restore script
    RESTORE DB DCC use adsm open 4 sessions TAKEN AT 20080722155529 on /db2/SCC/sapdata1,/db2/SCC/sapdata2,/db2/SCC/sapdata3,/db2/SCC/sapdata4,/db2/SCC/sapdata5,/db2/SCC/sapdata6,/db2/SCC/saptemp1 dbpath on /db2/SCC INTO SCC NEWLOGPATH /db2/SCC/log_dir with 16 buffers REPLACE EXISTING redirect parallelism 2 without rolling forward without prompting
    3. drop the target db and clean out all sapdata's and saptemp
    4. create db on target: db2 create database scc automatic storage yes on /db2/SCC/sapdata1,/db2/SCC/sapdata2,/db2/SCC/sapdata3,/db2/SCC/sapdata4,/db2/SCC/sapdata5,/db2/SCC/saptemp1 dbpath on /db2/SCC
    5. update the target db cfg with TSM parms so that it will be able to read the TSM tapes, i.e. tsm_mgmtclass, tsm_nodename, tsm_owner, tsm_password
    6. set up my dsm.opt with
    SErvername       redirect_restore
    and the dsm.sys with a new stanza
    SErvername           redirect_restore
       SchedMode            PROMPT
       nodename             useagan1743ddb2
       PasswordAccess       prompt
       COMMMethod           TCPip
       TCPPort              1500
       TCPServeraddress     unixtsm1p.ecolab.com
    7. as root, I run /db2/db2<sid>/sqllib/adsm/dsmapipw to set the password and make sure I have access to the TSM nodename.
    8. execute the restore script on the target
    RESTORE DB DCC use adsm open 4 sessions TAKEN AT 20080722155529 on /db2/SCC/sapdata1,/db2/SCC/sapdata2,/db2/SCC/sapdata3,/db2/SCC/sapdata4,/db2/SCC/sapdata5,/db2/SCC/sapdata6,/db2/SCC/saptemp1 dbpath on /db2/SCC INTO SCC NEWLOGPATH /db2/SCC/log_dir with 16 buffers REPLACE EXISTING redirect parallelism 2 without rolling forward without prompting;

    Hello Anke,
    Looks like your create database statement is missing the collating sequence, codeset, etc. Here is a database create statement I got from my sapinst for your reference:
    create database SX1
      automatic storage yes on /db2/SX1/sapdata1, /db2/SX1/sapdata2, /db2/SX1/sapdata3, /db2/SX1/sapdata4
      dbpath on /db2/SX1
      using codeset UTF-8
      territory en_US
      collate using IDENTITY_16BIT
      pagesize 16 k
      dft_extent_sz 2
      catalog tablespace managed by automatic storage extentsize 2
      with 'SAP database SX1';
    Regards,

  • SQL0904N and SQL30081N while using DB2-Connect driver

    Scenario:
    Application program : Java on Unix
    Database : DB2 on Mainframe
    Database driver : IBM's DB2 connect
    Usage : A number of connections are taken by the program and are used as long as the program is alive.
    (The lifetime of the program is variable; it can be as long as many days -- in a sense, the program acts like a daemon process that runs as long as it is not stopped.)
    Scenario:
    - Data is loaded into one of the tables using a load utility, such as BMC
    -Database goes to copy-pending status
    -Database is started by using the following command : "START DATABASE(database name) SPACENAM(table space name) ACCESS(FORCE)"
    Doubts:
    1. What happens to the connections that the program has opened to the database?
    2. Does the database relinquish all the resources (like the sockets it would have opened for communication with the driver) held by it during restarting?
    (At least before restarting, it is evident that no query on the table in question is possible; a typical SQL0904N, which indicates non-availability of resources, is thrown.)
    3. Can the above scenario cause an SQL30081N, which is a resetting of the connection by the peer, i.e. the database in this case?

    Yes, there is a high probability of such things. You may want to consider using a connection pool instead of opening the connections once at the start of the program and using them for days.
    Have a look at the following link to get an idea on how to implement a simple connection pool.
    http://developer.java.sun.com/developer/onlineTraining/Programming/JDCBook/conpool.html
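    That tutorial link is no longer maintained, so purely as an illustration (not production code, and not taken from the tutorial), here is a minimal pool sketch that validates connections before handing them out, replacing any that were reset while idle:

        import java.sql.*;
        import java.util.concurrent.*;

        // Illustrative only: a real application should use a maintained pool library.
        public class SimpleConnectionPool {
            private final BlockingQueue<Connection> idle =
                    new LinkedBlockingQueue<Connection>();
            private final String url, user, password;

            public SimpleConnectionPool(String url, String user, String password,
                                        int size) throws SQLException {
                this.url = url; this.user = user; this.password = password;
                for (int i = 0; i < size; i++) {
                    idle.add(DriverManager.getConnection(url, user, password));
                }
            }

            // Borrow a connection, replacing any that died while idle
            // (for example, reset by the database during a restart).
            public Connection borrow() throws SQLException, InterruptedException {
                Connection c = idle.poll(5, TimeUnit.SECONDS);
                if (c == null || c.isClosed() || !c.isValid(2)) {
                    c = DriverManager.getConnection(url, user, password);
                }
                return c;
            }

            public void release(Connection c) {
                idle.offer(c);
            }
        }

    The point for the scenario above: after a START DATABASE ... ACCESS(FORCE), connections held since program start may be dead, and a pool that validates on borrow recovers transparently instead of surfacing SQL30081N to the application.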

  • How to auto update date column without using trigger

    Hi,
    How do I write the below MySQL query in Oracle 10g?
    CREATE TABLE example_timestamp (
      Id number(10) NOT NULL PRIMARY KEY,
      Data VARCHAR(100),
      Date_time TIMESTAMP DEFAULT current_timestamp ON UPDATE current_timestamp
    );
    I need to auto-update the Date_Time column, without using a trigger, whenever I update a record.
    The example shown below is from MySQL. I want to perform the below steps in Oracle to auto-update the Date_Time column.
    mysql> INSERT INTO example_timestamp (data)
    VALUES ('The time of creation is:');
    mysql> SELECT * FROM example_timestamp;
    | id | data | Date_Time |
    | 1 | The time of creation is: | 2012-06-28 12:37:22 |
    mysql> UPDATE example_timestamp
    SET data='The current timestamp is: '
    WHERE id=1;
    mysql> SELECT * FROM example_timestamp;
    | id | data | Date_Time |
    | 1 | The current timestamp is: | 2012-06-28 12:38:55 |
    Regards,
    Yogesh.

    Is there no functionality in Oracle to auto-update a date column without using a trigger?
    I don't want to update the date column in the UPDATE statement.
    The date column should automatically get updated whenever I execute an UPDATE statement.

  • Define BINARY DATA column (DB2 to Oracle migration)

    I have a table where a column is defined as 'FOR BIT DATA'. This specifies that the contents of the column are to be treated as bit (binary) data. During data exchange with other systems, code page conversions are not performed; comparisons are done in binary, irrespective of the database collating sequence.
    During conversion to Oracle using Oracle Migration Workbench, this table is converted without regard to this definition. Please see below for details:
    DB2 table definition:
    CREATE TABLE ADMIN.XENCY (
              ID INTEGER NOT NULL ,
              XENCYSYMBOL CHAR(2) FOR BIT DATA NOT NULL )
              IN TS_XENCY ;
    Oracle table definition as generated by OMWB:
    CREATE TABLE ADMIN.XENCY
    ( ID NUMBER(10,0) NOT NULL ENABLE,
    XENCYSYMBOL CHAR(2) NOT NULL ENABLE
    ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE TS_XENCY;
    During data loading from DB2 to Oracle it fails for this table with the following error:
    Type: Error
    Time: 14-08-2006 18:21:02
    Phase: Migrating
    Message: Unable to migrate data from source table ADMIN.XENCY to destination table ADMIN.XENCY : ADMIN.XENCY; ORA-12899: value too large for column "ADMIN"."XENCY"."XENCYSYMBOL" (actual: 4, maximum: 2)
    For Oracle the length of the data in this column is 4; however, for DB2 it is only 2, as shown below:
    $ db2 "select max(length(Xencysymbol)) from admin.Xency with ur"
    1
    2
    1 record(s) selected.
    $     
    The distinct contents of this table is as follows:
    $ db2 "select distinct Xencysymbol from admin.Xency with ur"
    XENCYSYMBOL
    x'0024'
    x'2020'
    2 record(s) selected.
    $
    How do I define a column as containing binary data in Oracle?
    Thanks in advance.

    The MWB DB2 LUW plug-in incorrectly translates CHAR FOR BIT DATA. It doesn't pay particular attention to any of the CHAR modifiers; it just sees that column as a CHAR. Obviously, this is a defect.
    Binary data of limited length is stored in RAW type columns in an Oracle DB.
    You have two problems (in addition to issuing an ALTER COLUMN to fix the type declaration, or dropping the table, tweaking the table creation script, and reloading):
    1. The data extraction scripts utilized for that table have not properly encoded the data for transport between DB systems. If there are endian differences between the source and target platforms, there may be additional problems. Nominally, you can export the binary data in hex format.
    2. The SQL*Loader file needs to be adjusted (if you dump in hex format, the data in the file is characters of a hex value):
    XENCYSYMBOL CHAR "hextoraw(:XENCYSYMBOL)"
    Loader will convert those hex characters back into RAW as it loads.
    [The built-in data pump will obviously not work, because the system thinks it is a CHAR. So offline data transfer is your only option.]
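    Once the column is redefined as RAW on the Oracle side, the binary values move through JDBC as byte arrays. A hedged sketch (connection details are placeholders; the x'0024' value is from the DB2 data above):

        import java.sql.*;

        public class InsertRawDemo {
            public static void main(String[] args) throws Exception {
                Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//localhost:1521/ORCL", "admin", "password");
                PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO ADMIN.XENCY (ID, XENCYSYMBOL) VALUES (?, ?)");
                ps.setInt(1, 1);
                ps.setBytes(2, new byte[] {0x00, 0x24}); // x'0024' from the DB2 table
                ps.executeUpdate();
                ps.close();
                con.close();
            }
        }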

  • If you use DB2, send me your pool definition

    Hi. If you use DB2 with weblogic, please send me your pool definition.
    I want to make your pool creation as automatic as we can do it.
    thanks
    Joe


  • Using MODEL clause and COUNT for non-numeric data columns

    Hi ,
    Is it possible somehow to use the COUNT function to transform a non-numeric data column into a numeric data value (a counter) to be used in a MODEL clause?
    For example, I tried the following on the emp table of the SCOTT schema, with no desired result:
    SQL> select deptno , empno , hiredate from emp;
    DEPTNO EMPNO HIREDATE
        20  7369 18/12/1980
        30  7499 20/02/1981
        30  7521 22/02/1981
        20  7566 02/04/1981
        30  7654 28/09/1981
        30  7698 01/05/1981
        10  7782 09/06/1981
        20  7788 18/04/1987
        10  7839 17/11/1981
        30  7844 08/09/1981
        20  7876 21/05/1987
        30  7900 03/12/1981
        20  7902 03/12/1981
        10  7934 23/01/1982
    14 rows selected.
    Now, I want to use the MODEL clause in order to 'predict' the number of employees who were going to be hired in 1990 per deptno.
    So, I have constructed the following query which, as expected, does not return the desired results:
    SQL>   select deptno , month , year , count_
      2    from
      3    (
      4    select deptno , to_number(to_char(hiredate,'mm')) month ,
      5                to_number(to_char(hiredate , 'rrrr')) year , count(ename) count_
      6    from emp
      7    group by  deptno , to_number(to_char(hiredate,'mm'))  ,
      8                to_number(to_char(hiredate , 'rrrr'))
      9    )
    10    model
    11    partition by(deptno)
    12    dimension by (month , year)
    13    measures (count_ )
    14    (
    15     count_[1,1990]=count_[1,1982]+count_[11,1982]
    16    )
    17  /
        DEPTNO      MONTH       YEAR     COUNT_
            30          5       1981          1
            30         12       1981          1
            30          2       1981          2
            30          9       1981          2
            30          1       1990
            20          4       1987          1
            20          5       1987          1
            20          4       1981          1
            20         12       1981          1
            20         12       1980          1
            20          1       1990
            10          6       1981          1
            10         11       1981          1
            10          1       1982          1
            10          1       1990
    As you see, the measure for year 1990 is null, because the measure (the count) is computed via the GROUP BY and not by the MODEL clause.
    How should I transform the above query so that "count_[1,1982]+count_[11,1982]" returns non-null results per deptno?
    Thanks , a lot
    Simon

    Connected to Oracle Database 10g Express Edition Release 10.2.0.1.0
    Connected as hr
    SQL>
    SQL> SELECT department_id, MONTH, YEAR, count_
      2    FROM (SELECT e.department_id
      3                ,to_number(to_char(e.hire_date, 'mm')) MONTH
      4                ,to_number(to_char(e.hire_date, 'rrrr')) YEAR
      5                ,COUNT(e.first_name) count_
      6            FROM employees e
      7            WHERE e.department_id = 20
      8           GROUP BY e.department_id
      9                   ,to_number(to_char(e.hire_date, 'mm'))
    10                   ,to_number(to_char(e.hire_date, 'rrrr')));
    DEPARTMENT_ID      MONTH       YEAR     COUNT_
               20          8       1997          1
               20          2       1996          1
    SQL> --
    SQL> SELECT department_id, MONTH, YEAR, count_
      2    FROM (SELECT e.department_id
      3                ,to_number(to_char(e.hire_date, 'mm')) MONTH
      4                ,to_number(to_char(e.hire_date, 'rrrr')) YEAR
      5                ,COUNT(e.first_name) count_
      6            FROM employees e
      7            WHERE e.department_id = 20
      8           GROUP BY e.department_id
      9                   ,to_number(to_char(e.hire_date, 'mm'))
    10                   ,to_number(to_char(e.hire_date, 'rrrr')))
    11  model
    12  PARTITION BY(department_id)
    13  dimension BY(MONTH, YEAR)
    14  measures(count_)(
    15    count_ [1, 1990] = count_ [2, 1996] + count_ [8, 1997]
    16  );
    DEPARTMENT_ID      MONTH       YEAR     COUNT_
               20          8       1997          1
               20          2       1996          1
               20          1       1990          2
    SQL> ---
    SQL> SELECT department_id, MONTH, YEAR, count_
      2    FROM (SELECT e.department_id
      3                ,to_number(to_char(e.hire_date, 'mm')) MONTH
      4                ,to_number(to_char(e.hire_date, 'rrrr')) YEAR
      5                ,COUNT(e.first_name) count_
      6            FROM employees e
      7           GROUP BY e.department_id
      8                   ,to_number(to_char(e.hire_date, 'mm'))
      9                   ,to_number(to_char(e.hire_date, 'rrrr')))
    10  model ignore nav
    11  PARTITION BY(department_id)
    12  dimension BY(MONTH, YEAR)
    13  measures(count_)(
    14    count_ [1, 1990] = count_ [2, 1996] + count_ [8, 1997]
    15  );
    DEPARTMENT_ID      MONTH       YEAR     COUNT_
              100          8       1994          2
               30         12       1997          1
              100          3       1998          1
               30          7       1997          1
                           5       1999          1
               30         12       1994          1
               30         11       1998          1
               30          5       1995          1
              100          9       1997          2
              100         12       1999          1
               30          8       1999          1
                           1       1990          0
               30          1       1990          0
              100          1       1990          0
               90          9       1989          1
               20          8       1997          1
               70          6       1994          1
    93 rows selected
    SQL>

  • Production mobile Flex app using RemoteObject: a new remote call fails only in a release build

    I have a production mobile Flex app that uses RemoteObject calls for all data access, and it's working well, except for a new remote call I just added that only fails when running with a release build. The same call works fine when running on the device (iPhone) using a debug build. When running with a release build, the result handler is never called (nor is the fault handler called). Viewing the BlazeDS logs in debug mode, the call is received and sent back with data. I've narrowed it down to what seems to be a data size issue.
    I have targeted one specific data call that returns in the String value a string length of 44kb, which fails in the release build (result or fault handler never called), but the result handler is called as expected in the debug build. When I do not populate the String value (in server-side Java code) on the object (just set it to an empty string), the result handler is then called, and the object is returned (release build).
    The custom object being returned in the call is a very simple object, with getters/setters for the simple types boolean, int, and String, and one org.w3c.dom.Document type. This same object type is used on other RemoteObject calls (different data) and works fine (release and debug builds). I originally was returning a Document but, just to make sure this wasn't the problem, changed the value to be returned to a String, to rule out XML/DOM issues in serialization.
    I don't understand 1) why the release build vs. debug build behavior is different for a RemoteObject call, and 2) why the calls work in a debug build when sending over a somewhat large (but not unreasonable) amount of data in a String object, but not in a release build.
    I haven't tried to find out exactly where the failure point in size is but am not sure that's even relevant, since 44kb isn't an unreasonable size to expect.
    By turning on debug mode in BlazeDS, I can see the object and its attributes being serialized, and everything looks good there. The calls are received and processed appropriately in BlazeDS for both debug and release build testing.
    Anyone have an idea on other things to try to debug/resolve this?
    Platform testing is BlazeDS 4, Flashbuilder 4.7, Websphere 8 server, iPhone (iOS 7.1.2). Tried using multiple Flex SDK's 4.12 to the latest 4.13, with no change in behavior.
    Thanks!

    After a week's worth of debugging, I found the issue.
    The Java type returned from the call was defined as ArrayList. Changing it to List resolved the problem.
    I'm not sure why ArrayList isn't a valid return type; I've been looking at the Adobe docs and still can't see why it isn't valid. And why it works in debug mode and not in a release build is even stranger. Maybe someone can shed some light on the logic here.
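    A hedged before/after sketch of that change (class, method, and element types are illustrative, not from the post): the remote method exposed through BlazeDS declares the List interface as its return type rather than the concrete ArrayList:

        import java.util.ArrayList;
        import java.util.List;

        public class ReportService {
            // Before: concrete return type, which failed in release builds.
            // public ArrayList<String> getReportData() { ... }

            // After: declare the interface; the implementation can still be an ArrayList.
            public List<String> getReportData() {
                List<String> rows = new ArrayList<String>();
                rows.add("row data"); // placeholder payload
                return rows;
            }
        }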

  • Apps exiting in the middle of typing an email - a wireless issue?

    Is anyone else having email problems such as apps exiting in the middle of an email? It may be a wireless issue. I use First Class for work and yahoo email for personal. I will be in the middle of typing a long email and the app just quits, all data lost.

    Have you tried restarting or resetting your iPad?
    Restart: Press On/Off button until the Slide to Power Off slider appears, select Slide to Power Off and, after the iPad shuts down, then press the On/Off button until the Apple logo appears.
    Reset: Press the Home and On/Off buttons at the same time and hold them until the Apple logo appears (about 10 seconds). Ignore the "Slide to Power Off" slider if it appears.

  • Using sqlldr when source data column is 4000 chars

    I'm trying to load some data using sqlldr.
    The table looks like this:
    col1 number(10) primary key
    col2 varchar2(100)
    col3 varchar2(4000)
    col4 varchar2(10)
    col5 varchar2(1)
    ... and some more columns ...
    For current purposes, I only need to load columns col1 through col3. The other columns will be NULL.
    The source text data looks like this (tab-delimited) ...
    col1-text<<<TAB>>>col2-text<<<TAB>>>col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    END-OF-RECORD
    There's nothing special about the source data for col1 and col2.
    But the data for col3 is (usually) much longer than 4000 chars, so I just need to truncate it to fit varchar2(4000), right?
    The control file looks like this ...
    LOAD DATA
    INFILE 'load.dat' "str 'END-OF-RECORD'"
    TRUNCATE
    INTO TABLE my_table
    FIELDS TERMINATED BY "\t"
    OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    col1 "trim(:col1)",
    col2 "trim(:col2)",
    col3 char(10000) "substr(:col3,1,4000)"
    I made the column 3 specification char(10000) to allow sqlldr to read text longer than 4000 chars.
    And the subsequent directive is meant to truncate it to 4000 chars (to fit in the table column).
    But I get this error ...
    Record 1: Rejected - Error on table COL3.
    ORA-01461: can bind a LONG value only for insert into a LONG column
    The only solution I found was ugly.
    I changed the control file to this ...
    col3 char(4000) "substr(:col3,1,4000)"
    And then I hand-edited (truncated) the source data for column 3 to be shorter than 4000 chars.
    Painful and tedious!
    Is there a way around this difficulty?
    Note: I cannot use a CLOB for col3. There's no option to change the app, so col3 must remain varchar2(4000).

    You can load the data into a staging table with a CLOB column, then insert into your target table using substr, as demonstrated below. I have truncated the data display to save space.
    -- load.dat:
    1     col2-text     col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    XYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
    YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
    END-OF-RECORD
    -- test.ctl:
    LOAD DATA
    INFILE 'load.dat' "str 'END-OF-RECORD'"
    TRUNCATE
    INTO TABLE staging
    FIELDS TERMINATED BY X'09'
    OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    col1 "trim(:col1)",
    col2 "trim(:col2)",
    col3 char(10000)
    SCOTT@orcl_11gR2> create table staging
      2    (col1 varchar2(10),
      3       col2 varchar2(100),
      4       col3 clob)
      5  /
    Table created.
    SCOTT@orcl_11gR2> host sqlldr scott/tiger control=test.ctl log=test.log
    SCOTT@orcl_11gR2> select * from staging
      2  /
    COL1
    COL2
    COL3
    1
    col2-text
    col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    XYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
    YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
    1 row selected.
    SCOTT@orcl_11gR2> create table my_table
      2    (col1 varchar2(10) primary key,
      3       col2 varchar2(100),
      4       col3 varchar2(4000),
      5       col4 varchar2(10),
      6       col5 varchar2(1))
      7  /
    Table created.
    SCOTT@orcl_11gR2> insert into my_table (col1, col2, col3)
      2  select col1, col2, substr (col3, 1, 4000) from staging
      3  /
    1 row created.
    SCOTT@orcl_11gR2> select * from my_table
      2  /
    COL1
    COL2
    COL3
    COL4       C
    1
    col2-text
    col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    XYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
    YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
    1 row selected.

  • Can I use Time Machine for data recovery?

    Hi All,
    This might seem a bit confusing at first, so please be patient - and thanks in advance.
    I'll try to make this as simple as possible:
    1) I had an external HDD (which I stupidly didn't back up) that I used to free up space on my MB Pro. I knocked it off my desk and it's now unrecoverable - there is a 'might be able to get some data back, but it will cost up to $2000' option.
    2) To free up said space, I moved all the big stuff - photos, movies, documents, iTunes library - to this HDD.
    3) Now, before I did this, I was already - and still am - using ANOTHER HDD for my Time Machine backups.
    4) Does anyone have any idea as to whether I could salvage the above-mentioned data from an old TM snapshot - assuming it hasn't been overwritten already?
    5) And, if it's possible, can it be done without rolling back the MB to the appropriate date?

    Hi Thomas,
    Thanks for the quick response! But I didn't think I explained myself very well :s
    I originally had everything on the MB internal HDD, and only had 1 HDD which was / is still, my TM HDD.
    I then moved all the photos, music etc to the 2nd HDD, to free up memory on the MB HDD.
    I dropped the 2nd HDD and it's beyond recovery.
    What my thought is - given that originally all the data would have been backed up on the TM HDD - can I recover it from there? (If it hasn't been overwritten, of course.)
    I'm unsure if the 2nd HDD was included in TM backups. It was usually connected to the MB when TM was running...

  • Is there a way to go online with a wifi-only iPad using my iPhone's unlimited data plan?

    I have an iPhone with an unlimited data plan. Is there a way to go online with a wifi only iPad using my phone for access?

    If your iPhone cell carrier allows you to use tethering, you can turn your iPhone into a wifi hotspot, and then your iPad Air can connect to that for internet access. Not all cell carriers allow tethering without you signing up for additional fees/services. For example, AT&T in the US still has grandfathered "unlimited" data plans, but they do not allow tethering on those plans. With AT&T you need to switch to a Mobile Share plan (or a tiered plan, if still available) to use tethering (legally).

  • Steps to load the data by using flat file for hierarchies in BI 7.0

    Hi Gurus,
    What are the steps to load data using a flat file for hierarchies in BI 7.0?

    Hi,
    You will get the steps in the following blog by Prakash Bagali:
    Hierarchy Upload from Flat files
    Regards,
    Rathy
