Remote call to a procedure over DB Link - Datatype mismatch

Hi All,
I have a procedure that takes a collection as an input parameter. I invoked this procedure from another DB over a DB link but got a datatype mismatch error. I've given the collection type the same name in both databases and used the same user ID.
Please help me out.
Regards,
MK
CREATE OR REPLACE TYPE type_market_places IS TABLE OF VARCHAR2 (100);
CREATE OR REPLACE PROCEDURE racsf_populate_site (p_in_mrktplaces type_market_places)
IS
   --lv_sql   VARCHAR2 (32000);
BEGIN
   NULL;   -- body omitted in the original post
END;
------Invoking the procedure from the other DB using DB link
DECLARE
  P_IN_MRKTPLACES type_market_places;
BEGIN
  select col1 bulk collect into P_IN_MRKTPLACES;
  RACSF_POPULATE_SITE@DB_link ( P_IN_MRKTPLACES );
  COMMIT;
END;

Declare the actual parameter with the collection type from the remote database, i.e. reference the type through the DB link.
Try this, please:
DECLARE
  P_IN_MRKTPLACES type_market_places@DB_link;
BEGIN
  select col1 bulk collect into P_IN_MRKTPLACES;
  RACSF_POPULATE_SITE@DB_link ( P_IN_MRKTPLACES );
  COMMIT;
END;
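Following that suggestion, a complete version of the calling block might look like the sketch below. The table markets_tab and column col1 are hypothetical (the original post omits the FROM clause), and whether a standalone (non-package) collection type can be referenced through a DB link this way depends on your Oracle version and its restrictions on remote object types, so treat this as a sketch rather than a confirmed fix.
DECLARE
  p_in_mrktplaces type_market_places@DB_link;   -- collection type taken from the remote database
BEGIN
  SELECT col1
    BULK COLLECT INTO p_in_mrktplaces
    FROM markets_tab;                            -- hypothetical local source table
  racsf_populate_site@DB_link (p_in_mrktplaces);
  COMMIT;
END;
/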

Similar Messages

  • Execute procedure over db link

    I am trying to execute a stored procedure over a database link and I get the following error:
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER@DBCS(
    ERROR at line 12:
    ORA-06550: line 12, column 2:
    PLS-00306: wrong number or types of arguments in call to 'SET_UPDATE_CONFLICT_HANDLER'
    ORA-06550: line 12, column 2:
    PL/SQL: Statement ignored
    However, when I execute the same piece of code by logging on to the server, the code executes successfully. "DBCS" is the database link name.
    CODE which I am executing over the DB LINK:
    DECLARE
    COL_NAME DBMS_UTILITY.NAME_ARRAY;
    CURSOR C1 IS
    SELECT COLUMN_NAME FROM ALL_TAB_COLUMNS WHERE OWNER='SCOTT' AND TABLE_NAME='DEPT';
    BEGIN
    OPEN C1;
    FETCH C1 BULK COLLECT INTO COL_NAME;
    CLOSE c1;
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER@DBCS(
    OBJECT_NAME => 'SCOTT.DEPT',
    METHOD_NAME => 'OVERWRITE',
    RESOLUTION_COLUMN => COL_NAME(1),
    COLUMN_LIST => COL_NAME);
    END;
    CODE without DB LINK (executed on the db the db link points to):
    DECLARE
    COL_NAME DBMS_UTILITY.NAME_ARRAY;
    CURSOR C1 IS
    SELECT COLUMN_NAME FROM ALL_TAB_COLUMNS WHERE OWNER='SCOTT' AND TABLE_NAME='DEPT';
    BEGIN
    OPEN C1;
    FETCH C1 BULK COLLECT INTO COL_NAME;
    CLOSE c1;
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    OBJECT_NAME => 'SCOTT.DEPT',
    METHOD_NAME => 'OVERWRITE',
    RESOLUTION_COLUMN => COL_NAME(1),
    COLUMN_LIST => COL_NAME);
    END;
    Please guide.
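    One workaround for the PLS-00306 above, consistent with the wrapper approach used elsewhere in this thread, is to keep the index-by table entirely on the remote database so it never crosses the link. The wrapper name below is hypothetical, and it assumes direct EXECUTE privilege on DBMS_APPLY_ADM on the remote side.
    -- On the remote database (the one DBCS points to):
    CREATE OR REPLACE PROCEDURE set_dept_conflict_handler
    IS
      col_name DBMS_UTILITY.NAME_ARRAY;
    BEGIN
      SELECT column_name
        BULK COLLECT INTO col_name
        FROM all_tab_columns
       WHERE owner = 'SCOTT' AND table_name = 'DEPT';
      DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER (
        object_name       => 'SCOTT.DEPT',
        method_name       => 'OVERWRITE',
        resolution_column => col_name(1),
        column_list       => col_name);
    END;
    /
    -- From the local database:
    BEGIN
      set_dept_conflict_handler@DBCS;
    END;
    /
    The usual cause of the error is that the locally declared DBMS_UTILITY.NAME_ARRAY is a different type from the NAME_ARRAY defined in the remote database's DBMS_UTILITY, so the remote procedure sees an argument of the wrong type.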

    orausern wrote:
    Hi,
    We are on Oracle 11.2.0.2 on Solaris 10. I have two databases on this version of Oracle. From one db I execute a stored procedure via db link in another database. The stored procedure has dynamic SQL (execute immediate) and it uses a commit statement as part of the code. Is it allowed to execute this procedure over a db link? I am not aware of the implications of stored proc execution over a db link and will be thankful for help.
    Thanks,
    If no error is thrown, why are you looking for problems where none exist?

  • Calling Hana Store Procedure thorugh Eclipse link

    Hi,
    Does anybody have an idea how to call a HANA stored procedure through EclipseLink JPA?
    The procedure has a scalar IN parameter and a table type as the OUT parameter:
    CREATE PROCEDURE tree_view (IN  topicid BIGINT,OUT qtree tt_tree)
        LANGUAGE SQLSCRIPT
        READS SQL DATA AS  
    BEGIN
    /// SQL Statements
    END;
    Currently I am calling the procedure as:
    entiyManager.getTransaction().begin();
                java.sql.Connection connection = entiyManager.unwrap(java.sql.Connection.class);
                connection.setAutoCommit(false);
                CallableStatement cst = connection.prepareCall("call _SYS_BIC.\"workspace.procedures/tree_view\"(?,?)");
                cst.setLong(1, identifier);
                cst.execute();
                ResultSet rs =  cst.getResultSet();
         entiyManager.getTransaction().commit();
    But I need a cleaner way, like using StoredProcedureQuery or PLSQLStoredProcedureCall.

    Slightly missed the point.
    The question was about providing an INPUT variable.
    And in fact it is not possible to call a procedure with a table type input variable from the SQL console.
    You have to build a wrapper to call such a procedure:
    create type myusers as table (user_name nvarchar(60));
    create procedure count_users (IN user_tab myusers, OUT user_count bigint )
    language sqlscript
    as
    begin
        select count(*) into user_count from :user_tab;
    end;
    call count_users (? , ?)
    Could not execute 'call count_users (? , ?)' .
    SAP DBTech JDBC: [7]: feature not supported:
    Parameterized input table parameter is not allowed: line 1 col 19 (at pos 18)
    A wrapper for this could look like this:
    create procedure call_cu (out user_count bigint)
    language sqlscript
    as
    begin
        v_users = select user_name from users;
        call count_users (:v_users, :user_count);
    end;
    call call_cu (?)
    --> 28
    Unlike SQL*Plus for PL/SQL, the SQL console of SAP HANA is not a SQL Script runtime shell.
    - Lars

  • Execute procedure over database link

    Hi,
    I'm working on a package where you can select the required database, paste a query, and, by clicking a button, have it execute and create an explain plan for that query.
    Therefore I've created a procedure on each database that puts the explain plan in the PLAN_TABLE.
    To do this I need to execute this procedure from the package. I use the following code:
    DBA_OWNER.DB_PCD_EXPLAIN_PLAN@DBA_LINK_533.WORLD(p_query);
    This is working fine, as this database link already exists to the required database. But I want to make the database link name variable. I've already created the dynamic database link:
    l_link varchar2(20) := 'DBA_LINK';
    l_link_nr number(4) := dbms_random.value(0,1000);
    l_db varchar2(20) := l_link||'_'||l_link_nr;
    l_statement varchar2(4000);
    begin
    l_statement := 'create database link '||l_db||' connect to **** identified by ***** using '''||p_database||'.WORLD''';
    execute immediate l_statement ;
    So now I have the dynamic database link that is created each time I call the package.
    Now I want to use this database link to call the procedure. This is where it goes wrong.
    The code I'm using:
    DBA_OWNER.DB_PCD_EXPLAIN_PLAN@l_db(p_query);
    is not accepted when I build the package.
    The following error message is shown:
    PACKAGE BODY DBA_OWNER.PCK_EXPLAIN_PLAN
    On line: 66
    PLS-00352: Unable to access another database 'L_DB'
    Does someone know how to use this variable to execute the procedure on the remote db?

    Any procedures you call have to be accessible at compile time, so the database link must also exist at compile time.
    You can get around this by putting your procedure call to DB_PCD_EXPLAIN_PLAN inside dynamic SQL, the same as you are doing to create the database link. Dynamic SQL is not checked until runtime.
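    A minimal sketch of that approach, reusing the l_db and p_query names from your code (the bind variable :qry keeps the query text out of the concatenated string):
    l_statement := 'BEGIN DBA_OWNER.DB_PCD_EXPLAIN_PLAN@' || l_db || '(:qry); END;';
    EXECUTE IMMEDIATE l_statement USING p_query;
    Because the remote call is inside a string, the link name is not resolved until the block actually runs, which is after your EXECUTE IMMEDIATE has created the link.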
    Think about the risk if this procedure is exploited. You are creating a way for someone to run arbitrary code in any database in your environment, presumably with a highly-privileged ID, if it can run an explain plan against any schema in any target database.
    This is exposing some significant security issues. Read up on SQL injection, and have someone else review the code for security issues before deploying this.

  • Calling remote procedure through database link

    Hi,
    I have a procedure in a package with a type I declared as follows:
    Database1 :
    create or replace package p1
    is
    TYPE dependents_rec IS RECORD (
    name VARCHAR2 (80),
    dob date);
    TYPE dependents IS TABLE OF dependents_rec
    INDEX BY BINARY_INTEGER;
    procedure proc1 (p_id in number,
    p_dependents out dependents );
    end;
    On another database, database2, I want to call the above procedure from within a procedure in a package that works as a wrapper package with a similar structure.
    I receive the "wrong number or types of parameters" error because the type dependents is not the same as the one declared in the package on database 1.
    How can I call the procedure p1.proc1 over a database link?
    Thank you

    Hi,
    On database1:
    CREATE OR REPLACE PACKAGE types_pkg is
    TYPE dependents_rec IS RECORD (
    NAME VARCHAR2 (80),
    sex NUMBER (1),
    birth_date DATE,
    birth_place NUMBER (3),
    status NUMBER (1),
    unique_id VARCHAR2 (15));
    TYPE dependents IS TABLE OF dependents_rec
    INDEX BY BINARY_INTEGER;
    end;
    CREATE OR REPLACE PACKAGE wrapper_pkg_demo
    AS
    PROCEDURE read_person_no (
    p_person_id NUMBER,
    p_dependent_info OUT types_pkg.dependents);
    END;
    CREATE OR REPLACE PACKAGE BODY wrapper_pkg_demo
    AS
    PROCEDURE read_person_no (
    p_person_id NUMBER,
    p_dependent_info OUT types_pkg.dependents)
    IS
    BEGIN
    person_pkg.read_person_no (p_person_id, p_dependent_info);
    END;
    END;
    On database 2:
    CREATE OR REPLACE PACKAGE person_pkg
    AS
    PROCEDURE read_person_no (
    p_person_id NUMBER,
    p_dependent_info OUT types_pkg.dependents@database1);
    END;
    CREATE OR REPLACE PACKAGE body person_pkg
    AS
    PROCEDURE read_person_no (
    p_person_id NUMBER,
    p_dependent_info OUT types_pkg.dependents@database1)
    IS
    BEGIN
    null;
    END;
    END;
    When compiling the package body on database 1
    ORA-04052: error occurred when looking up remote object TYPES@MOI
    ORA-00604: error occurred at recursive SQL level 1
    ORA-02019: connection description for remote database not found
    Note that the database links are working properly, and when compiling the package on database 2, it compiles successfully.
    Thank you
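    For what it's worth, ORA-02019 generally means the database link name being resolved (MOI, going by the first error line) does not exist or is not visible in the schema doing the compilation. A quick sanity check, run as the schema that owns the package being compiled (link names per your own setup):
    SELECT db_link, username, host FROM user_db_links;
    SELECT * FROM dual@database1;
    If the second query fails from that schema, the package body will not compile either, regardless of how the types are referenced.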

  • Calling procedure over dblink - PLS-00306: wrong number or types of args

    Getting the above error when calling a procedure residing in a package on another database over a link.
    The arguments are an object and a table of objects. The variables being passed are based on objects declared locally that look exactly like the ones the procedure is expecting.
    I haven't done this before, so I'm not sure what the issue is. The execute privilege has been granted on the dependent objects remotely as well as locally. I don't know what else to try, and I have not had any luck finding a similar issue online. Please help.

    Thanks but - before doing that I just wanted to see if someone can review this and see if this is my problem:
    http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96594/adobjbas.htm#466038
    We are using objects declared on the remote database, and the variables I'm passing are declared against exact copies on the host. But, from what I understand from the link above, you can't use object types across links. I'm completely lost as to whether this is the issue or not. It seems like a fairly obscure topic as far as documentation goes. If this is the issue, however, what is the way around it? We are loading a table of objects with 40,000-plus rows of data to pass to a procedure remotely. If we can't pass the object type, how do we get the rows to the procedure? (We aren't against modifying the remote procedure; we just need to know the best method.)
    Thanks in advance for your time - help.
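    Since you can modify the remote procedure, one approach that avoids sending object types across the link at all is to stage the 40,000 rows as plain relational data on the remote database and let the remote procedure build its own object collection from them. A sketch with hypothetical names (stage_rows is a staging table on the remote side, local_source is the local source, load_from_stage is the adapted remote procedure, remote_db is the link):
    BEGIN
      INSERT INTO stage_rows@remote_db (col1, col2, col3)
        SELECT col1, col2, col3
          FROM local_source;
      load_from_stage@remote_db;
      COMMIT;
    END;
    /
    Plain columns and scalar parameters pass over a database link without the object-type restriction mentioned in the linked documentation.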

  • Procedure performance over db link

    In our source database we have a procedure that selects data from a view and inserts it into another table.
    When we run the procedure on the source system itself, it executes in 3-4 minutes.
    The issue comes when we execute the procedure across a database link:
    exec proc@source
    The plan changes and the execution takes a huge amount of time. Are we hitting some bug, or is stabilising the plan by adding hints the only solution?
    Database version is 10.2.0.5
    Best Regards

    864411 wrote:
    In our source database we have a procedure that selects data from a view and inserts it into another table.
    When we run the procedure on the source system itself, it executes in 3-4 minutes.
    The issue comes when we execute the procedure across a database link.
    WHY across dblink?
    >
    >
    exec proc@source
    The plan changes and the execution takes a huge amount of time.
    Doctor, it hurts when I poke myself in the eye.
    How can I get the pain to stop?
    So stop poking yourself in the eye!

  • Facing problem in distributed queries over Oracle linked server

    Hi,
    I have a SQL Server 2005 x64 Standard Edition SP3 instance. On the instance, we had distributed (4-part) queries over an Oracle linked server running fine just a few hours back, but now they have started taking too long. They seem to work fine when OPENQUERY is used. I have a huge number of queries using the same mechanism and it is not feasible for me to convert all of them to OPENQUERY; please help in getting this resolved.
    Thanks in advance.

    Hi Ashutosh,
    According to your description, you face performance issues with distributed queries and it is not feasible for you to convert all queries to OPENQUERY. To improve the performance, you could follow the solutions below:
    1. Make sure that you have a high-speed network between the local server and the linked server.
    2. Use the DRIVING_SITE hint. The DRIVING_SITE hint forces query execution to be done at a different site than the initiating instance. This is done when the remote table is much larger than the local table and you want the work (join, sorting) done remotely to save the back-and-forth network traffic. In the following example, we use the DRIVING_SITE hint to force the "work" to be done on the site where the huge table resides:
    select /*+ DRIVING_SITE(h) */ ename
    from   tiny_table t,
           huge_table@remote h
    where  t.deptno = h.deptno;
    3. Use views. For instance, you could create a view on the remote site referencing the tables and call the remote view via the local view, as in the following example:
    create view local_cust as select * from cust@remote;
    4. Use procedural code. In some rare occasions it can be more efficient to replace a distributed query by procedural code, such as a PL/SQL procedure or a precompiler program.
    For more information about the process, please refer to the article:
    http://www.dba-oracle.com/t_sql_dblink_performance.htm
    Regards,
    Michelle Li

  • How to call MSSQL stored procedure from oracle database

    The MSSQL and Oracle databases are linked through an ODBC link using Oracle HSODBC.
    I can query an MSSQL table or view from the Oracle database using the standard notation for accessing remote objects, schema.object@dblink_name...
    Can anybody give me the syntax for calling an MSSQL stored procedure through an ODBC database link?
    I tried the syntax exec schema.stored_procedure@dblink_name but it doesn't work... I'm getting a "schema.stored_procedure must be declared" error...
    Thanks in advance!
    Dejan Botica

    Oracle database 10gR2.
    MSSQL2000 database.
    For example query:
    select * from dbo.Tbl_Test@kron@dw_jamnica; works fine...
    ...while for example exec dbo.Test@kron@dw_jamnica;
    reports error:
    ORA-06550: line 1, column 7:
    PLS-00201: identifier 'DBO.TEST@KRON@DW_JAMNICA' must be declared
    ORA-06550: line 1, column 7:
    PL/SQL: Statement ignored
    Table Tbl_Test and procedure Test exists in MSSQL instance.
    Regards,
    Dejan
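    If the gateway allows pass-through, one way to run the native T-SQL is DBMS_HS_PASSTHROUGH, which sends the statement to SQL Server as-is instead of trying to resolve the procedure name as an Oracle object. This is only a sketch: whether HSODBC permits pass-through for your driver, and the privileges involved, are assumptions to verify (the link name is taken from the post above).
    DECLARE
      l_rows INTEGER;
    BEGIN
      -- ship the native statement to SQL Server through the gateway
      l_rows := DBMS_HS_PASSTHROUGH.EXECUTE_IMMEDIATE@kron@dw_jamnica ('exec dbo.Test');
      COMMIT;
    END;
    /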

  • Can we create pl/sql code over database link?

    Hi All,
    We are using Oracle 9i database.
    I want to know if we can create or modify PL/SQL code (procedure, package, etc.) over the database link. That is, can we create a procedure in a remote database using the db link?
    Thanks,
    Dnyanesh

    yes, I can connect to the remote database directly and create/modify the packages.
    But I want the users not to create them using the db link.
    Is there any way to create the pl/sql package over database link?
    Thanks,
    Dnyanesh
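    For reference: DDL such as CREATE OR REPLACE PACKAGE cannot be issued directly through a database link, but it can be done indirectly by asking the remote database to run the statement itself, for example via DBMS_UTILITY.EXEC_DDL_STATEMENT. A sketch (the link name remote_db is hypothetical, and the remote account must have the privileges to create the object):
    BEGIN
      DBMS_UTILITY.EXEC_DDL_STATEMENT@remote_db (
        'create or replace procedure ping as begin null; end;');
    END;
    /
    Whether you want to allow this is a separate question; if the goal is to stop users doing it, the remote account that the link connects as simply should not have the CREATE PROCEDURE privilege.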

  • Create orders in CRM with repetitive CRM_ORDER_MAINTAIN remote calls

    Dear All,
    Though we all must have done a lot of playing around with CRM, I am sure a lot of us would find this a tricky one. We know that the CRM Web UI framework at the back end is actually a repetitive, remote-call framework over CRM APIs and SICFs.
    Now I have a requirement to create a sales order, for example; I shall provide custom screens in HTML or UI5 and I have to provide a similar experience to the user, wherein the user will add values to fields on the screen and press enter, etc. Hence I will initially call CRM_ORDER_MAINTAIN remotely (through a custom RFC) and create a blank sales order (as done in the CRM UI framework), and then on each subsequent enter I want to call the CRM_ORDER_MAINTAIN FM with the guid I already have from the first call.
    I will call ORDER_SAVE in the end only, to prevent wastage of memory. Also I want to avoid generating new GUIDS at each enter or round trip, hence want to continue with the guid/buffer values of the first call till the save.
    My doubt is in case I remotely call CRM_ORDER_MAINTAIN the second time with the GUID I got in the first call, will the system still have all those buffered values( order_header / guid ) of the first call in the second call or simply saying : will CRM_ORDER_MAINTAIN work in this case even if I pass the guid in subsequent calls?
    P.S : Please don't provide alternate solutions of saving on each enter / new guid generation on each enter.
    /Hasan

    Absolutely, my dear friend; I am happy that someone is surely taking interest.
    We have long understood that a jazzy/intuitive UI is always welcomed over a slow/dull UI.
    Wasted guids are what you get when you create a new transaction through MAINTAIN but then don't save it: in the same session you again send a create command (for the same transaction), new guids are requested from the system when you call MAINTAIN for the header, items, partner links etc., and this cycle continues until you save.
    The best practice is not to waste these, because a create has a lot of overheads compared to a change: creating new guids, create-time checks etc. Hence the CRM architecture is also based on a single create per transaction per session. The CRM UI runs on a stateful HTTP session, which is the reason the buffers are retained even over repeated RFC/HTTP calls.
    /Hasan

  • Force Routing Over second link?

    We have a MPLS WAN connecting our offices from our Service Provider. Our Head Office has a larger 50Mbps pipe and a remote office has 2 separate 2Mbps links (lets call them link1 and link2). Right now all traffic only goes over Link1 at the remote office as per all the BGP routing. I can make a static route at the remote office for specific traffic to go over link2 and it will successfully send over that link. My question is, is there a way to get traffic from Head Office to go to Link2 instead of Link1?

    It primarily depends on BGP routing. You can try to change the way the ISP sends traffic to your branch by using different BGP path attributes in the advertisements on the two links, but you have to negotiate this with the ISP, because they can also "force" their network to use one path instead of the other, for example using BGP local preference. I suggest you read this to understand the BGP best-path algorithm:
    http://www.cisco.com/c/en/us/support/docs/ip/border-gateway-protocol-bgp/13753-25.html
    Bye,
    enrico

  • Interactive report performance problem over database link - Oracle Gateway

    Hello all;
    This is regarding a thread Interactive report performance problem over database link that was posted by Samo.
    The issue I am facing is that when I use an APEX function like apex_item.checkbox, the query slows down by 45 seconds.
    The query is like this (due to sensitivity issues, I cannot disclose the real table names):
    SELECT apex_item.checkbox(1,b.col3)
    , a.col1
    , a.col2
    FROM table_one a
    , table_two b
    WHERE a.col3 = 12345
    AND a.col4 = 100
    AND b.col5 = a.col5
    table_one and table_two are remote tables (non-oracle) which are connected using Oracle Gateway.
    Now, if I run the above query without the apex_item.checkbox function, the response is less than a second, but with apex_item.checkbox the query runs for more than 30 seconds. I have worked around the issue by creating a collection, but that's not good practice.
    I would like to get ideas on how to resolve this or speed up the query.
    Any idea how to use sub-factoring for the above scenario? Or other methods (creating a view or materialized view is not an option)?
    Thank you.
    Shaun S.

    Hi Shaun
    Okay, I have a million questions (could you tell me if both tables are from the same remote source, it looks like they're possibly not?), but let's just try some things first.
    By now you should understand the idea of what I termed 'sub-factoring' in a previous post. This is to do with using the WITH blah AS (SELECT... syntax. Now in most circumstances this 'materialises' the results of the inner select statement. This means that we 'get' the results, then do something with them afterwards. It's a handy trick when dealing with remote sites, as sometimes you want the remote database to do the work. The reason I ask you to use the MATERIALIZE hint for testing is just to force this; in 99.99% of cases it can be removed later. A WITH clause is also handled differently from an inline view like SELECT * FROM (SELECT..., but the same result can be mimicked with a NO_MERGE hint.
    Looking at your case I would be interested to see what the explain plans and results would be for something like the following two statements (sorry, you're going to have to check them, it's late!)
    WITH a AS
      (SELECT /*+ MATERIALIZE */ *
       FROM table_one),
    b AS
      (SELECT /*+ MATERIALIZE */ *
       FROM table_two),
    sourceqry AS
      (SELECT b.col3 x
            , a.col1 y
            , a.col2 z
       FROM a
          , b
       WHERE a.col3 = 12345
       AND   a.col4 = 100
       AND   b.col5 = a.col5)
    SELECT apex_item.checkbox(1,x), y, z
    FROM sourceqry;

    WITH a AS
      (SELECT /*+ MATERIALIZE */ *
       FROM table_one),
    b AS
      (SELECT /*+ MATERIALIZE */ *
       FROM table_two)
    SELECT apex_item.checkbox(1, b.col3), a.col1, a.col2
    FROM a
       , b
    WHERE a.col3 = 12345
    AND   a.col4 = 100
    AND   b.col5 = a.col5;
    If the remote tables are at the same site, then you should have the same results. If they aren't, you should get the same results but different to the original query.
    We aren't being told the real cardinality of the inner selects here, so the explain plan is distorted (this is normal for queries on remote and especially non-Oracle sites). This hinders tuning normally, but I don't think this is your problem at all. How many distinct values do you normally get for the column aliased 'x', and how many rows are normally returned in total? Also, how are you testing response times: in APEX, SQL Developer, Toad, SQL*Plus, etc.?
    Sorry for all the questions but it helps to answer the question, if I can.
    Cheers
    Ben
    http://www.munkyben.wordpress.com
    Don't forget to mark replies helpful or correct ;)

  • Report over database link error

    Good morning all.
    I have a database link LINKA from USERA on LOCALSERVER to USERB on REMOTESERVER.
    (create database link LINKA connect to USERB identified by mypassword using validtns)
    In USERC on REMOTESERVER I have a TABLEX, with grant all on TABLEX to USERB on REMOTESERVER.
    In USERB on REMOTESERVER I have a view TABLEXV, select * from USERC.TABLEX
    From USERA on LOCALSERVER I can select * from TABLEXV@LINKA. In fact I can insert/update/delete tablexv from USERA.
    I also have a view TABLEXV on USERA, select * from TABLEXV@LINKA. I can also select/insert/update/delete from USERA.TABLEXV.
    So far so good - all working as expected.
    Then, my APEX app on LOCALSERVER, interactive report, parsing schema USERA, querying TABLEXV, I get the following:
    when "Exclude Link Column", I get the error "The requested URL /pls/apex/f was not found on server.
    when "Link to Single Row View", report displays correctly.
    when "Link to Custom Target", I get the error "The requested URL /pls/apex/f was not found on server.
    When I run the select * from TABLEXV as a regular report (i.e. not interactive), I also get the error.
    So the only time I get the report displayed correctly is using IR and "Link to Single Row View".
    However, clicking on the link (for Single Row View) I get "The requested URL /pls/apex/wwv_flow.show was not found on this server.".
    To summarize:
    - accessing table via view over database link
    - works outside of apex without issue
    - inside only works when Link to Single Row View set
    - But clicking on link fails
    I could live with just having the Link to Single Row View option if I could disable display of the icon so that the user can't click on it.
    APEX 3.2.00.27
    EE Database - 10.2.0.3 (yes I know out of support)
    Remote EE Database - 10.2.0.5
    Real object names replaced to protect the innocent!
    Anyone got any clues?
    I have used remote tables many times in my APEX applications and never had this issue.
    Thanks,
    Rob @ very confused .com


  • RemoteObject call fails in release build but works in debug build (mobile Flex app)

    I have a production mobile Flex app that uses RemoteObject calls for all data access, and it's working well, except for a new remote call I just added that only fails when running with a release build. The same call works fine when running on the device (iPhone) using a debug build. When running with a release build, the result handler is never called (nor is the fault handler called). Viewing the BlazeDS logs in debug mode, the call is received and sent back with data. I've narrowed it down to what seems to be a data size issue.
    I have targeted one specific data call that returns a String value 44kb in length, which fails in the release build (result or fault handler never called), but the result handler is called as expected in the debug build. When I do not populate the String value (in server-side Java code) on the object (just set it to an empty string), the result handler is then called, and the object is returned (release build).
    The custom object being returned in the call is a very simple object, with getters/setters for the simple types boolean, int, String, and one org.w3c.dom.Document type. This same object type is used on other RemoteObject calls (different data) and works fine (release and debug builds). I originally was returning a Document, but, just to make sure this wasn't the problem, changed the value to be returned to a String, just to rule out XML/DOM issues in serialization.
    I don't understand 1) why the release build vs. debug build behavior is different for a RemoteObject call, 2) why the calls work in debug build when sending over a somewhat large (but, not unreasonable) amount of data in a String object, but not in release build.
    I haven't tried to find out exactly where the size failure point is, but I'm not sure that's even relevant, since 44kb isn't an unreasonable size to expect.
    By turning on Debug mode in BlazeDS, I can see the object and its attributes being serialized, and everything looks good there. The calls are received and processed appropriately in BlazeDS for both debug and release build testing.
    Anyone have an idea on other things to try to debug/resolve this?
    Platform testing is BlazeDS 4, Flashbuilder 4.7, Websphere 8 server, iPhone (iOS 7.1.2). Tried using multiple Flex SDK's 4.12 to the latest 4.13, with no change in behavior.
    Thanks!

    After a week's worth of debugging, I found the issue.
    The Java type returned from the call was defined as ArrayList.  Changing it to List resolved the problem.
    I'm not sure why ArrayList isn't a valid return type, I've been looking at the Adobe docs, and still can't see why this isn't valid.  And, why it works in Debug mode and not in Release build is even stranger.  Maybe someone can shed some light on the logic here to me.
