Security in Essbase at the database level

I need to edit users' ability to access various databases within an application in Essbase. The user has been provisioned to the application in Shared Services, but it appears that further restricting access to particular databases has to be done in Essbase. Right? I apparently don't have that "role" assigned to me. What is the "role" I need to have assigned?

Hi
It sounds like you need to create Essbase security filters. These are created in Essbase and can then be assigned to users either in Essbase or via Shared Services. Filters are specific to databases, so you can grant different access to different databases, or none at all to some.
Take a look at the Essbase Database Administrator's Guide.
Hope this helps
Stuart

Similar Messages

  • Row-level security at the Database level

    We need row-level security at the database level, where the user who logs in to Crystal Reports should be able to fetch only those rows from the database that he is entitled to see. For this, the login name of the user is passed to a stored procedure which sets the context of the DB session and restricts the data retrieved.
    We are not looking for row-level security where the data is first retrieved and then filtered based on the user login name. However, we are definitely looking for a way to set a context for a database session based on the user login name, even before we start fetching data. So effectively, the user who logs in will fetch only those rows which he is supposed to see.
    Issue:
    We face a problem: we are not able to pass a variable holding the user login to the database stored procedure that sets the context. 'BOUSER' works for this in BO (via ConnectInit), whereas 'CurrentCEUserName' in Crystal Reports does not.
    Please let us know if the 'CurrentCEUserName' variable can be used in Crystal in the same way 'BOUSER' is used in ConnectInit for BO. We would like to know how to pass any variable holding the user login information from Crystal Reports to a stored procedure.
    Also, please suggest alternate ways to achieve this security restriction, if any.

    Hi
    A previous database had a personnel table with the station name, district and region, plus a field holding each person's logon name. We also had an activity table with fields describing the activity, and a field for the station, district and region it occurred in.
    By linking the individual rows in the activity table to the personnel table on the station name field, we then used CurrentCEUserName to filter on the personnel. This returned only the records in the activity table where the station the activity took place at was the same as the station associated with the logged-on person.
    The additional bonus was that if we linked on district or region we had the same result but at a higher level, i.e. all activity in the logged-on person's district or, if linked on region, their region.
    The personnel table was maintained by the system administrators, so maintenance was low.
    I hope this helps.
    Kevin
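
    For what it's worth, the context-setting approach from the original question can be done entirely inside the database in Oracle (the thread doesn't say which database is used; an application context is one way to do it, and all names below are illustrative, loosely following the tables in Kevin's reply):
    -- trusted procedure that stamps the session with the report user's login
    CREATE OR REPLACE CONTEXT report_ctx USING set_report_user;
    CREATE OR REPLACE PROCEDURE set_report_user (p_login IN VARCHAR2) IS
    BEGIN
      DBMS_SESSION.SET_CONTEXT('report_ctx', 'login_name', p_login);
    END;
    /
    -- a view can then restrict activity rows to the logged-on user's station
    CREATE OR REPLACE VIEW my_activity AS
    SELECT a.*
    FROM   activity a, personnel p
    WHERE  p.station = a.station
    AND    p.logon_name = SYS_CONTEXT('report_ctx', 'login_name');
    The report connection would call set_report_user with the login name once, before fetching, and every query against the view is filtered from then on.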

  • Updates to the table from the database level.

    Hi Dear All,
    If we make some updates to a table at the database level, for example deleting some records from the table at the Oracle level, I am still able to see the deleted records through the Data Dictionary (SE11) at the application level.
    Can you please explain the mechanism: how is this possible, and why?
    best regards
    Mahesh

    Transparent tables store data directly and have a one-to-one relationship with the underlying database tables. If you delete some data from a transparent table through ABAP, the change is reflected in the database (Oracle), but the reverse is not guaranteed: if you modify the database table contents directly, the dictionary layer knows nothing about it. If you still see the deleted rows in SE11, the most likely reason is that they are being served from the table buffer on the application server rather than read fresh from the database.
    Hope that clarifies a bit (somebody correct me if I am horribly wrong).

  • User defined tables:  amending Index on the database level. Opinions???

    Hi everybody who has some spare time to read my stuff.
    I have a problem that some of you might have had. I have a user-defined table; let's call it ProductTypes. The system by default creates two columns in this table: Code (primary key) and Name (unique index). I have added a third column called Department. Now, if I wanted to add the following data (see below) to the table, I would get a constraint violation message pointing out that I have a problem with the index.
    Code, Name, Department
    1, Cream, Fragrances
    2, Cream, Beauty Products
          ^^
    I can think of a couple of workarounds for this problem:
    1. Duplicate Code into Name and store the rest of the data in user columns:
    Code, Name, Product Name, Department
    1, 1, Cream, Fragrances
    2, 2, Cream, Beauty Products
    This approach isn't very convenient, as it requires UI development should we decide to attach this table to the Item master data form in the form of a combo box.
    2. Amend the index at the database level (see the sketch after this message). Initially the index KProductTypes_Name consisted of only one column, Name; what I have done is add another column, Code, to the index. I don't see how this can harm database consistency or damage the core system. Please correct me if I am wrong.
    Another way of amending the index to solve my problem could be choosing the 'ignore duplicate values' option for the Name column.
    Please let me know your thoughts.
    Best wishes
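
    A sketch of what amending the index in option 2 amounts to in SQL (SQL Server-style DDL; the table and index names follow the post and are otherwise illustrative):
    -- replace the single-column unique index with a composite one over (Name, Code)
    DROP INDEX KProductTypes_Name ON ProductTypes;
    CREATE UNIQUE INDEX KProductTypes_Name ON ProductTypes (Name, Code);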

    > Why don't you try adding an 'instead of insert'
    > trigger where code = max(code)+1 and name = max(name)+1,
    > and use only user columns for your data? This
    > provided you know SQL basics.
    In this scenario we would have to do UI SDK development for the output, and we would end up with an extra column of meaningless data.

  • Enable single item recovery with two retention settings at the database level.

    Hello All,
    We have an Exchange 2010 SP3 RU4 environment and are planning to move from a third-party archive solution to native Exchange archives for cost-reduction purposes; upgrading to Exchange 2013 to benefit from the added in-place features is not within scope at this stage.
    We are looking at implementing the following steps and want to know if it will work:
    1-Create archive DB(s) as per our usage and growth projections
    2-Enable archives for all our users and migrate current archive content to it.
    3-Create Retention Tag/Policy to move all records from live to archive "Age limit for retention" 90 days (no retention tags on the policy)
    4-Enable Single Item recovery for all of our users (script the same to run twice daily to enable SIR for newly created accounts)
    5-Set the "Keep Deleted Items" on the Live DB(s) to 90 days and the Archive DB(s) to 7 Years
    6-We are NOT using Legal Hold or plan to use it except on per as need basis
    Are we accomplishing the following:
    1-Items are automatically archived after 90 days
    2-Items archived now have a 7year retention based on the "keep deleted items" set for the archive DB(s)
    3-Items copied back to the live mailbox by a user will be returned to the archive database the next time the folder assistant runs against this user account (based on load or if run manually)
    4-Items hard-deleted by a user are recoverable as long as the email record is within the retention period set on the database where it resides.
    5-Hard deleted items are recoverable using MFCMapi or by a restore.
    6-Items are permanently purged on the archive DB(s) after 7 years.
    Any input, ideas, recommendations, clarifications would be greatly valued and appreciated.  
    Ash

    Thanks CodexCZ,
    So, SIR will "kind of" do the same as the retention tag, except I can use different durations based on the limits on each DB? Am I correct?
    thanks again.
    Ash

  • Script to calculate space usage at the database level.

    Hi,
    Can someone provide me a script to calculate the total space, used space and free space for all tablespaces within a database?
    I have been trying the two combinations below in my queries, and they give different results.
    1) dba_data_files & dba_free_space
    select t.tablespace,
           t.totalspace as "Totalspace(MB)",
           round((t.totalspace - fs.freespace), 2) as "Used Space(MB)",
           fs.freespace as "Freespace(MB)",
           round(((t.totalspace - fs.freespace) / t.totalspace) * 100, 2) as "% Used",
           round((fs.freespace / t.totalspace) * 100, 2) as "% Free"
    from (select round(sum(d.bytes) / (1024 * 1024)) as totalspace,
                 d.tablespace_name tablespace
          from dba_data_files d
          group by d.tablespace_name) t,
         (select round(sum(f.bytes) / (1024 * 1024)) as freespace,
                 f.tablespace_name tablespace
          from dba_free_space f
          group by f.tablespace_name) fs
    where t.tablespace = fs.tablespace
    order by t.tablespace;
    2) dba_extents & dba_free_space
    select t.tablespace,
           t.totalspace as "Totalspace(MB)",
           round((t.totalspace - fs.freespace), 2) as "Used Space(MB)",
           fs.freespace as "Freespace(MB)",
           round(((t.totalspace - fs.freespace) / t.totalspace) * 100, 2) as "% Used",
           round((fs.freespace / t.totalspace) * 100, 2) as "% Free"
    from (select round(sum(d.bytes) / (1024 * 1024)) as totalspace,
                 d.tablespace_name tablespace
          from dba_extents d
          group by d.tablespace_name) t,
         (select round(sum(f.bytes) / (1024 * 1024)) as freespace,
                 f.tablespace_name tablespace
          from dba_free_space f
          group by f.tablespace_name) fs
    where t.tablespace = fs.tablespace
    order by t.tablespace;
    Thanks in advance,
    regards,
    Arul S

    -- check the total, used and free space in tablespaces
    -- (on the difference you saw: dba_extents reports only allocated extents,
    -- so a "total" computed from it is really used space; dba_data_files
    -- reports the full size of every datafile, which is the true total)
    select a.TABLESPACE_NAME,
           a.BYTES MB_total,
           b.BYTES MB_free,
           b.largest,
           a.BYTES - b.BYTES MB_used,
           round(((a.BYTES - b.BYTES) / a.BYTES) * 100, 2) percent_used
    from   (select TABLESPACE_NAME,
                   sum(BYTES) / 1048576 BYTES
            from   dba_data_files
            group  by TABLESPACE_NAME) a,
           (select TABLESPACE_NAME,
                   sum(BYTES) / 1048576 BYTES,
                   max(BYTES) / 1048576 largest
            from   dba_free_space
            group  by TABLESPACE_NAME) b
    where  a.TABLESPACE_NAME = b.TABLESPACE_NAME
    order  by ((a.BYTES - b.BYTES) / a.BYTES) desc;
    Regards
    Asif Kabir
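
    As an aside on the discrepancy: used space can also be taken from DBA_SEGMENTS directly, which avoids treating a dba_extents total as the tablespace size:
    -- used space per tablespace, summed over all allocated segments
    select tablespace_name,
           round(sum(bytes) / 1048576, 2) as mb_used
    from   dba_segments
    group  by tablespace_name;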

  • Database Level Security not working ???

    The 10 g (10.1.2.1) documentation states the following:
    Chapter 7 Controlling access to information:
    "Regardless of the access permissions and task privileges that you set in Discoverer Administrator, a Discoverer end user only sees folders if that user has been granted the following database privileges (either directly or through a database role):
    ex: SELECT privilege on all the underlying tables used in the folder "
    So how come a folder (a view in my case, not a table) cannot be queried directly by a user, but the folder still shows up as a choice when building a report using Plus? Am I misreading the above? It sounds to me like, if the user account does not have the SELECT privilege, they should not see the folder in Discoverer at all.
    Has anyone run into the same issue, or have an explanation?
    thanks
    OBX

    The user can see all the folders in a business area in Discoverer if he has been granted access to that business area. This is Discoverer-level security, meant to filter out people who should not have access to the business area at all. You'll find that although users can see these folders because the permission is set in Discoverer Administrator, the database tables the folders are based on will not let the users see any of the data if they don't have those rights at the database level.
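
    In other words, folder visibility and data access are checked separately; the missing piece is usually just a direct grant on the underlying object (the names here are illustrative):
    GRANT SELECT ON eul_owner.sales_summary_view TO report_user;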

  • SYS user connects at database level, is it correct?

    My senior colleague has given me the following information about the SYS user. I want to know, is it correct?
    Since the SYS user connects at the database level, on killing the active session of the SYS user only the current statement is cancelled. The database session does not disconnect; instead it continues to run the remaining statements in the script file, in case we are running a script file containing a lot of SQL statements.
    Moazzam

    Moazzam wrote:
    My senior colleague has given me the following information about the SYS user. I want to know, is it correct?
    Since the SYS user connects at the database level, on killing the active session of the SYS user only the current statement is cancelled. The database session does not disconnect; instead it continues to run the remaining statements in the script file, in case we are running a script file containing a lot of SQL statements.
    Running a SQL script very likely means SQL*Plus is used. One of two types of Oracle sessions will be created via sqlplus: a dedicated session, or a shared server session.
    A dedicated session can also be local (sqlplus connects "directly" to the dedicated server process), or remote (sqlplus connects via tcp/ip to the dedicated server process).
    A server session is usually "killed" using the ALTER SYSTEM KILL SESSION command. Despite the differences between shared and dedicated server connections, the end result is the same: the session terminates abnormally (the session UGA is released, the session is cleaned up, rolled back, etc.) and the session ceases to exist.
    So irrespective of how that sqlplus session runs that script - the session, when killed, will cause a sqlplus failure. And no subsequent script commands would be executed by that Oracle session.
    What can happen is that sqlplus continues running, continues reading the script, and continues submitting commands to be executed. However, with the server session killed, there is no server process to service the commands submitted by the sqlplus client. In this case, sqlplus will throw the error "SP2-0640: Not connected" after each of the commands it tries to execute after the server session was killed.
    The only time sqlplus will be able to continue is when the session is not killed, but interrupted. The Oracle Call Interface (OCI) supports an OCIBreak() call, allowing the client to interrupt and abort the request that its server session is currently executing.
    For example, sqlplus sends a OCIBreak() while it waits for a server response (e.g. waiting for the answer to a SQL select query), when the user presses Ctrl-Break to abort that request.
    In this case, the session still exists - and the client can issue a new request that the session will service. But an OCIBreak() cannot be triggered (to my knowledge) externally from another Oracle session. You need to send the client process a "break request" (like a Ctrl-Break keystroke) in order to trigger that client process to make an OCIBreak() call to Oracle and interrupt its server process.
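
    For reference, a sketch of the kill described above (look up the SID and serial# in V$SESSION first; the values below are placeholders):
    SELECT sid, serial#, username, status FROM v$session;
    ALTER SYSTEM KILL SESSION '123,4567';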

  • In eCATT - how to check at the database level using ABAP

    Hi,
    How do I check something at the database level using ABAP in the eCATT tool?
    Say, for example, I want to check at the database level whether a particular sales order is invoiced or not, and if it is invoiced I have to stop proceeding to invoicing of that sales order number.
    Could anybody suggest something on this, with an example?
    thanks.

    Hi,
    you can use the command GETTAB to access single DB records.
    Fully or partially specified keys can be used with GETTAB. It always returns only one record, even if several could match your selection.
    For more advanced scenarios you can also use eCATT's inline ABAP. In a block between the commands ABAP. and ENDABAP. you can code ABAP statements, e.g. SELECT ... INTO TABLE ...
    eCATT script parameters of type 'V' defined in a script using ABAP/ENDABAP are transferred into the ABAP block and back to the script after the ABAP block has run.
    Best regards
    Jens

  • Solving "COMMIT business rules" on the database server

    Headstart Oracle Designer related white paper
    "CDM RuleFrame Overview: 6 Reasons to get Framed"
    (at //otn.oracle.com/products/headstart/content.html) says:
    "For a number of business rules it is not possible to implement these in the server
    using traditional check constraints and database triggers. Below you can find two examples:
    Example rule 1: An Order must have at least one Order Line ..."
    But there is a method that allows solving "COMMIT rules" completely at the database level.
    It uses the possibility of deferring the checking of declarative constraints (NOT NULL, primary key, unique key, foreign key, check constraints) until commit
    (first introduced in version 8.0).
    E.g. we add the field "num_emps" to the DEPT table, which always holds the number
    of the belonging EMP rows, and add a DEFERRED check constraint that uses the values of that field:
    ALTER TABLE dept ADD num_emps NUMBER DEFAULT 0 NOT NULL;
    UPDATE dept
       SET num_emps = (SELECT COUNT (*) FROM emp WHERE emp.deptno = dept.deptno);
    DELETE dept WHERE num_emps = 0;
    ALTER TABLE dept ADD CONSTRAINT dept_num_emps_ck CHECK (num_emps > 0) INITIALLY DEFERRED;
    The triggers that ensure the server-side enforcement of this "COMMIT rule" are fairly simple.
    We need a packaged variable that is set and reset in the EMP triggers and whose value
    is read in the bur_dept trigger (of course, we could have placed the variable in the package
    specification and changed/read it directly, thus not needing the package body,
    but this is a "cleaner" way to do it):
    CREATE OR REPLACE PACKAGE pack IS
      PROCEDURE set_flag;
      PROCEDURE reset_flag;
      FUNCTION dml_from_emp RETURN BOOLEAN;
    END;
    /
    CREATE OR REPLACE PACKAGE BODY pack IS
      m_dml_from_emp BOOLEAN := FALSE;
      PROCEDURE set_flag IS
      BEGIN
        m_dml_from_emp := TRUE;
      END;
      PROCEDURE reset_flag IS
      BEGIN
        m_dml_from_emp := FALSE;
      END;
      FUNCTION dml_from_emp RETURN BOOLEAN IS
      BEGIN
        RETURN m_dml_from_emp;
      END;
    END;
    /
    CREATE OR REPLACE TRIGGER bir_dept
      BEFORE INSERT ON dept
      FOR EACH ROW
    BEGIN
      :NEW.num_emps := 0;
    END;
    /
    CREATE OR REPLACE TRIGGER bur_dept
      BEFORE UPDATE ON dept
      FOR EACH ROW
    BEGIN
      IF :OLD.deptno <> :NEW.deptno THEN
        RAISE_APPLICATION_ERROR (-20001, 'Can''t change deptno in DEPT!');
      END IF;
      -- only the EMP triggers may change the "num_emps" column
      IF NOT pack.dml_from_emp THEN
        :NEW.num_emps := :OLD.num_emps;
      END IF;
    END;
    /
    CREATE OR REPLACE TRIGGER air_emp
      AFTER INSERT ON emp
      FOR EACH ROW
    BEGIN
      pack.set_flag;
      UPDATE dept
         SET num_emps = num_emps + 1
       WHERE deptno = :NEW.deptno;
      pack.reset_flag;
    END;
    /
    CREATE OR REPLACE TRIGGER aur_emp
      AFTER UPDATE ON emp
      FOR EACH ROW
    BEGIN
      IF NVL (:OLD.deptno, 0) <> NVL (:NEW.deptno, 0) THEN
        pack.set_flag;
        UPDATE dept
           SET num_emps = num_emps - 1
         WHERE deptno = :OLD.deptno;
        UPDATE dept
           SET num_emps = num_emps + 1
         WHERE deptno = :NEW.deptno;
        pack.reset_flag;
      END IF;
    END;
    /
    CREATE OR REPLACE TRIGGER adr_emp
      AFTER DELETE ON emp
      FOR EACH ROW
    BEGIN
      pack.set_flag;
      UPDATE dept
         SET num_emps = num_emps - 1
       WHERE deptno = :OLD.deptno;
      pack.reset_flag;
    END;
    /
    If we insert a new DEPT without any belonging EMP rows, or delete all EMPs belonging to a certain DEPT, or move all EMPs out of a certain DEPT, then when the COMMIT is issued we get the following error:
    ORA-02091: transaction rolled back
    ORA-02290: check constraint (SCOTT.DEPT_NUM_EMPS_CK) violated
    The disadvantage is that one "auxiliary" column is (usually) needed for each "COMMIT rule".
    If we wanted to add another "COMMIT rule" to the DEPT table, like:
    "SUM (sal) FROM emp WHERE deptno = p_deptno must be <= p_max_dept_sal"
    we would have to add another column, like "dept_sal".
    The CDM RuleFrame advantage is that it does not force us to add "auxiliary" columns.
    We must emphasize that in real life we would not write PL/SQL code directly in the database triggers, but in packages, nor would we use RAISE_APPLICATION_ERROR directly.
    It is written this way in this sample only for clarity.
    Regards
    Zlatko Sirotic

    Zlatko,
    You are right, your method is a way to implement "COMMIT rules" completely on the database level.
    As you said yourself, the disadvantage is that you need an extra column for each such rule,
    while with CDM RuleFrame this is not necessary.
    A few remarks:
    - By adding an auxiliary column (like NUM_EMPS in the DEPT table) for each "COMMIT rule",
    you effectively change the type of the rule from Dynamic (depending on the type of operation)
    to a combination of Change Event (for updating NUM_EMPS) and Static (deferred check constraint on NUM_EMPS).
    - Deferred database constraints have the following disadvantages:
      When something goes wrong within the transaction, the complete transaction is rolled back, not just the piece that went wrong. Therefore, it becomes more important to use appropriate commit units.
      There is no report of the exact row responsible for the violation, nor are further violations (by other rows, or of other constraints) reported.
      If you use Oracle Forms as a front-end application, the errors raised from deferred constraints are not handled very well.
    - CDM discourages the use of check constraints. One of the reasons is that when all tuple rules are placed in the CAPI,
    any violations can be reported at the end of the transaction together with all other rule violations.
    A violated check constraint would abort the transaction right away, without the possibility of reporting other rule violations.
    So I think your tip is a good alternative if for some reason you cannot use CDM RuleFrame,
    but you'd miss out on all the other advantages of RuleFrame that are mentioned in the paper!
    kind regards, Sandra
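
    One way to soften the full-rollback disadvantage mentioned above is to check the deferred constraints explicitly just before committing: SET CONSTRAINTS ALL IMMEDIATE raises any pending violation as an ordinary statement error while the transaction stays alive, e.g.:
    SET CONSTRAINTS ALL IMMEDIATE;  -- a pending violation raises ORA-02290 here, without the ORA-02091 rollback
    COMMIT;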

  • How to view the table at the application level

    Dear All,
    How can we view a table in the Data Dictionary at the application level if the table was created at the database level using a CREATE statement?
    create table zmard as select * from sapone.mard where 1 = 2
    I would like to view the table above, which was created at the Oracle database level, in the Data Dictionary.
    Can anyone suggest a solution?
    Best wishes
    Mahesh

    Hi
    You should write a program using Native SQL in order to select and show the data.
    In SE11 or directly in the program you can define a structure like your table:
    DATA: BEGIN OF W_ZMARD,
            FIELD(18) TYPE C,   " one component per column of ZMARD (illustrative)
          END OF W_ZMARD.
    EXEC SQL.
      OPEN C1 FOR SELECT * FROM ZMARD
    ENDEXEC.
    DO.
      EXEC SQL.
        FETCH NEXT C1 INTO :W_ZMARD
      ENDEXEC.
      IF SY-SUBRC <> 0.
        EXIT.
      ENDIF.
      WRITE: / W_ZMARD-FIELD.
    ENDDO.
    EXEC SQL.
      CLOSE C1
    ENDEXEC.
    I don't know if it's possible to create a view in SE11, because that needs a table already defined in SE11; you can create a new view based on MARD, but I don't believe it would use your table.
    Max

  • How to send an email from the database

    I have created a post-insert trigger at the database level on the EMP table. The trigger calls a procedure, and that procedure sends an email to the new employee; the trigger passes the first name, second name, last name and the email address to the procedure. So my question is about the command that will send the email to the new employee. The email should look like this:
    Dear "first name" "second name" "last name"
    welcome to..........................................................
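
    For the sending itself, a minimal sketch using UTL_MAIL (available in 10g and later; it assumes the SMTP_OUT_SERVER parameter points at your mail server and that the package has been installed; all names are illustrative):
    CREATE OR REPLACE PROCEDURE send_welcome_mail (
      p_first_name  IN VARCHAR2,
      p_second_name IN VARCHAR2,
      p_last_name   IN VARCHAR2,
      p_email       IN VARCHAR2
    ) IS
    BEGIN
      -- UTL_MAIL sends through the server named in SMTP_OUT_SERVER
      UTL_MAIL.SEND(
        sender     => 'hr@example.com',
        recipients => p_email,
        subject    => 'Welcome',
        message    => 'Dear ' || p_first_name || ' ' || p_second_name || ' ' || p_last_name
                      || CHR(10) || 'Welcome to ...');
    END;
    /
    On older releases the lower-level UTL_SMTP package is the usual route; the AskTom link below covers that ground.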

    This thread could help answer some of your questions:
    http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:2391265038428
    C.

  • How to implement Oracle Label Security with Oracle8.1.5 database

    I want some fields in some tables that could not be viewed even by the DBA.
    I am working on Oracle Server 8.1.5.
    If possible it should be in the same database and same schema, but a different schema may also work.
    Please help

    I don't think this is going to be possible.
    When you register a crawler, you have to declare it as one of three types: Public, Identity-based or Attribute-based.
    The database crawler is registered as attribute-based, and therefore must be used with a suitable authorization manager.
    I guess in theory you could create a new authorization manager class which queries active directory to get the appropriate security attributes for a user (corresponding to the security attributes crawled from the database), but I suspect it might be easier to figure out a way to copy AD attributes into a database table (perhaps updating the table once a day via a nightly crawl of AD) and then use the standard database authorization manager.

  • Implementing A Group BY outside the database

    Guys,
    I have found out today that I need to provide an implementation which allows dynamic group-bys to be processed in Java. I have looked at Hibernate but it does not support what I have in mind.
    Ideally I have a result set of, say, 1000 rows. I need to be able to apply a group by, as well as aggregation, to this result set. This cannot be done at the database level.
    Any ideas on how to approach this - perhaps a database framework which allows results to be processed?
    Thanks

    DuffyMo,
    I have been visiting this site for well over 2 years and it is good to see you still participating.
    Thank you.
    Ideally Hibernate would be great, but I could not get anything concrete to work. The reason I ask to do it this way is because we are using a Microsoft stored procedure which returns all the data in a non-aggregated relation (table).
    My application needs to display this data, but there are rules like grouping which need to be applied.
    I think it is very difficult to write a group by in Java. Do you have any knowledge of whether Hibernate can process a stored procedure, possibly many times (since it might be a group by on multiple columns)?
    Thanks

  • Refreshing mview is hanging after a database level gather stats

    hi guys,
    can you please help me identify the root cause of this issue.
    the scenario is this:
    1. We have a scheduled Unix job that refreshes an mview every day, from Tuesday to Saturday.
    2. Database maintenance is done during weekends (Sundays), gathering stats at the database level.
    3. The refresh-mview Unix job apparently hangs every Tuesday.
    4. Our workaround is to kill the job, request a schema-level gather stats, then re-run the job. And voila, the mview refresh then succeeds.
    5. For the rest of the week, through Saturday, the mview refresh has no problems.
    We have already identified during testing that the mview refresh fails after we gather stats at the database level;
    after gathering stats at the schema level, the refresh is successful.
    Can you please help me understand why the mview refresh fails after we gather stats at the database level?
    We are using Oracle 9i.
    The creation of the mview goes something like this:
    create materialized view hanging_mview
    build deferred
    refresh on demand
    query rewrite disabled
    appreciate all your help.
    thanks a lot in advance.

    1. We have a scheduled Unix job that refreshes an mview every day, from Tuesday to Saturday.
    2. Database maintenance is done during weekends (Sundays), gathering stats at the database level.
    3. The refresh-mview Unix job apparently hangs every Tuesday.
    4. Our workaround is to kill the job, request a schema-level gather stats, then re-run the job. And voila, the mview refresh then succeeds.
    5. For the rest of the week, through Saturday, the mview refresh has no problems.
    You know Tuesday's MV refresh "hangs".
    You don't know why it does not complete.
    You desire a solution so that it does complete.
    You don't really know what it is doing on Tuesdays, but hope an automagical solution will be offered here.
    The ONLY way I know of to possibly get some clues is SQL_TRACE.
    Only after knowing where the time is being spent will you have a chance to take corrective action.
    The ball is in your court.
    Enjoy your mystery!
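
    As a concrete starting point, here is a sketch of enabling SQL_TRACE around the refresh on 9i (level 8 includes wait events; the trace file lands in user_dump_dest, and the mview name follows the post):
    alter session set timed_statistics = true;
    alter session set events '10046 trace name context forever, level 8';
    exec DBMS_MVIEW.REFRESH('HANGING_MVIEW');
    alter session set events '10046 trace name context off';
    Running tkprof on the resulting trace file will show where the time is actually being spent.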
