Processing large LDAP query results

I need to perform batch lookups of employees in an Active Directory instance - thousands of employees at a time - from within an ALBPM process based on a group filter. (I.e., get all employees in group X.) This works fine for small groups, but any query against a group with more than 1500 members returns zero results.
By default, AD limits the size of result sets to 1500. I have worked around this on other platforms by specifying query parameters that allow large result sets to be "paged." This allows results to be returned one chunk at a time over a single connection. Is there a way to do this in ALBPM, such as by including additional parameters in the "lookup" method of an LDAP component?
Thanks for any assistance.

Are you running your application on the Studio engine or the Enterprise engine?
Regards
Right Chord
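
For reference, the paging described in the question is the LDAP paged-results control (RFC 2696), which Active Directory supports. Whether ALBPM's LDAP component exposes request controls is a separate question, but from plain Java/JNDI a paged group search looks roughly like the sketch below; the server URL, credentials, DNs, and page size are placeholder assumptions.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import javax.naming.ldap.Control;
import javax.naming.ldap.InitialLdapContext;
import javax.naming.ldap.LdapContext;
import javax.naming.ldap.PagedResultsControl;
import javax.naming.ldap.PagedResultsResponseControl;

public class PagedGroupSearch {
    public static void main(String[] args) throws Exception {
        Hashtable<String, Object> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ad.example.com:389");          // placeholder host
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "CN=svc,OU=Service,DC=example,DC=com");
        env.put(Context.SECURITY_CREDENTIALS, "secret");
        LdapContext ctx = new InitialLdapContext(env, null);

        SearchControls sc = new SearchControls();
        sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
        sc.setReturningAttributes(new String[] { "sAMAccountName", "mail" });
        String filter = "(&(objectClass=user)(memberOf=CN=GroupX,OU=Groups,DC=example,DC=com))";

        int pageSize = 500;      // stay well under the server's result-set limit
        byte[] cookie = null;
        do {
            // Attach the paged-results request control asking for the next page
            ctx.setRequestControls(new Control[] {
                new PagedResultsControl(pageSize, cookie, Control.CRITICAL) });

            NamingEnumeration<SearchResult> results =
                ctx.search("DC=example,DC=com", filter, sc);
            while (results.hasMore()) {
                SearchResult entry = results.next();
                System.out.println(entry.getNameInNamespace());
            }

            // The response control carries the cookie that identifies the next page
            cookie = null;
            Control[] response = ctx.getResponseControls();
            if (response != null) {
                for (Control c : response) {
                    if (c instanceof PagedResultsResponseControl) {
                        cookie = ((PagedResultsResponseControl) c).getCookie();
                    }
                }
            }
        } while (cookie != null && cookie.length > 0);
        ctx.close();
    }
}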

Similar Messages

  • Cache an LDAP query result in a Map Object

    Is there a way to perform a single LDAP query and store the result in some type of indexed list object in memory? Specifically, I need to populate both LDAP manager and managerFullName for an LDAP user object based on an employeenumber query.
    I don't want to query LDAP for every user object. I would like to submit one search such as (objectclass=inetorgperson) and store the result in an indexed list in memory, using employeenumber as the key. This way I only need to query the indexed list object for each user entry.
    Is this possible?

    No, this is not possible.
    The only way to do this is to use a Java class you write yourself. But, and this is a major but: if you do not stay in the same place in IDM (form or workflow) you will lose the content, because the object will be garbage collected when you change.
    The other thing is: how much will you gain? The LDAP server can probably return the result far quicker than you can iterate through the list to find the entry.
    WilfredS
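
    For what it's worth, the indexed cache itself is straightforward in plain JNDI outside of IDM; a minimal sketch keyed by employeeNumber follows (the base DN and attribute names are assumptions).

    import java.util.HashMap;
    import java.util.Map;
    import javax.naming.NamingEnumeration;
    import javax.naming.NamingException;
    import javax.naming.directory.Attribute;
    import javax.naming.directory.Attributes;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;

    public class LdapIndex {
        // One search; the result is cached in a map keyed by employeeNumber,
        // so later lookups (manager, cn, ...) are pure in-memory map reads.
        public static Map<String, Attributes> buildIndex(DirContext ctx) throws NamingException {
            Map<String, Attributes> byEmployeeNumber = new HashMap<>();
            SearchControls sc = new SearchControls();
            sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
            sc.setReturningAttributes(new String[] { "employeeNumber", "manager", "cn" });
            NamingEnumeration<SearchResult> results =
                ctx.search("ou=people,dc=example,dc=com", "(objectClass=inetOrgPerson)", sc);
            while (results.hasMore()) {
                SearchResult entry = results.next();
                Attribute emp = entry.getAttributes().get("employeeNumber");
                if (emp != null) {
                    byEmployeeNumber.put((String) emp.get(), entry.getAttributes());
                }
            }
            return byEmployeeNumber;
        }
    }

    As WilfredS notes, whether this is worth it depends on where the map lives; if the object is garbage collected between forms or workflow steps, the cache is lost.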

  • Display data of query in Analysis Process Designer mismatch query result

    Hi experts,
    I am having an issue with APD. When I display the data of the query in APD, I am getting wrong values for key figures.
    Any solutions?
    Thanks

    Hi,
    after doing a check by restricting data in the query, I see that although the condition I have set in the query should only give me the data where key figure X <= 2, the DSO is updated with all the values. So it seems that conditions in queries are ignored by the APD. Does anybody know anything more about this? Then I can understand why the job was taking so much time...
    Regards,
    Øystein

  • Large query result set

    Hi all,
    At the moment we have some Java classes (not EJB CMP/BMP) for search in our EJB application.
    Now we have a problem: record counts have grown very high (millions), and sometimes a query results in the retrieval of millions of records. This causes too much memory consumption in our EJB application. What is the best way to address this issue?
    Any help will be highly appreciated.
    Thanks & regards,
    Parvez

    You can think of the following options:
    1) Paging: read only a few thousand records at a time and maintain an index to page through the complete dataset.
    2) Caching:
    a) You can create a serialized data file on the server to cache the result set and use that to browse through. You may do on-the-fly compression/decompression while sending data to the client.
    b) An applet-based solution where the caching could be on the client side. Look at
    http://www.sitraka.com/software/jclass/cs_ims.html
    thanks,
    Srinivas
    "chauhan" <[email protected]> wrote in message
    news:[email protected]...
    Thanks Slava Imeshev,
    We already have search criteria and a limit. When the number of records exceeds that limit, we prompt the user that it may take some time: do you want to proceed? If he clicks yes, then we retrieve those records. This results in a lot of memory consumption.
    I was thinking there might be some way to retrieve a block of records at a time from the database, rather than all records of a query. I wonder how internet search sites work, where thousands of sites/pages match the criteria and the client can move back and forth to any page.
    Regards,
    Parvez
    "Slava Imeshev" <[email protected]> wrote in message
    news:[email protected]...
    Hi chauhan,
    You may want to narrow the search criteria along with processing a limited number of resulting records. I.e., if the size of the result is bigger than a limit, you stop fetching results and notify the client that the search criteria should be narrowed.
    HTH.
    Regards,
    Slava Imeshev
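
    To make Srinivas's option (1) concrete, a keyset-pagination sketch in plain JDBC is shown below; the table, column, and connection details are invented. Each round trip pulls one fixed-size block and remembers the last key seen, so the application never holds millions of rows in memory at once.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public class BlockFetcher {
        private static final int BLOCK_SIZE = 5000;

        // Reads one block of rows whose id is greater than lastSeenId and
        // returns the ids read so the caller can ask for the next block.
        static List<Long> fetchBlock(Connection con, long lastSeenId) throws SQLException {
            List<Long> ids = new ArrayList<>();
            String sql = "SELECT id, payload FROM big_table WHERE id > ? ORDER BY id";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setLong(1, lastSeenId);
                ps.setMaxRows(BLOCK_SIZE);   // never pull more than one block into memory
                ps.setFetchSize(1000);       // hint to the driver to stream rows
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        ids.add(rs.getLong("id"));
                        // process rs.getString("payload") here instead of buffering it
                    }
                }
            }
            return ids;
        }

        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//db.example.com:1521/ORCL", "user", "pw")) { // placeholder
                long lastId = 0;
                List<Long> block;
                do {
                    block = fetchBlock(con, lastId);
                    if (!block.isEmpty()) {
                        lastId = block.get(block.size() - 1);
                    }
                } while (block.size() == BLOCK_SIZE);
            }
        }
    }

    This is also essentially how search sites page through huge result sets: the client only ever asks for the next slice, identified by a key or offset, rather than for the whole result.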
    "chauhan" <[email protected]> wrote in message
    news:[email protected]...
    Hi all,
    At the moment we have some java classes (not ejb - cmp/bmp) for
    search
    in
    our ejb application.
    Now we have a problem i.e. records have grown too high( millions ) and
    sometimes query results in retrieval of millions of records. It
    results
    in
    too much memory consumtion in our ejb application. What is the best
    way
    to
    address this issue.
    Any help will be highly appreciated.
    Thanks & regards,
    Parvez

  • Critical: LDAP: query DNS result DNS Hard Error looking up e

    I am not having any luck when trying to connect to all 3 of our LDAP servers... I get this error in the logs:
    Critical: LDAP: query DNS result DNS Hard Error looking up MyServer.Mydomain.com (A): NXDomain
    It is open through our firewalls. I don't even see the test query reach our firewalls... any suggestions as to what I am doing wrong?
    We were using Surfcontrol and it worked fine... :?:

    In Surfcontrol I put the IP without the DN and the query returns all the users.
    In IronPort, when I put the IP without the DN and do an Accept query using my email address as the Recipient Address, I get the above error.

  • Create Materialized View based on Results from LDAP Query

    Hi -- I'm trying to create a materialized view based on results from an LDAP query. Unfortunately, it looks like a materialized view can't be created based on a stored procedure, which is where the LDAP results are obtained (using nested loops).
    Does anyone have any idea how to do this without first kicking off a stored procedure that populates a temp table which would then be used to create the materialized view? I'm trying to minimize the steps the DBAs will need to go through when refreshing this new view.
    Thanks,
    ~Christine

    Can you give us more details about the stored procedure you're calling? It will help to know what parameters are involved and what data types they are.
    Off the top of my head though it looks like, at the very least, you would need a stored function that calls the stored procedure. I don't think there is any way to call stored procedures from CREATE ... commands. If you're going to create a stored function anyway ... well, you might as well just create a procedure that inserts values into a regular table instead of fussing with functions and materialized views. You'll probably want to schedule your new procedure to run periodically since it sounds like you'll need the values refreshed from time to time.
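
    The reply above suggests a PL/SQL procedure; purely as an illustration of the same idea (load the LDAP results into a regular table on a schedule and build reporting on top of that), here is a rough client-side JNDI-plus-JDBC sketch. Every host, DN, table, and column name is invented.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;

    public class LdapToStagingTable {
        public static void main(String[] args) throws Exception {
            Hashtable<String, Object> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://directory.example.com:389"); // placeholder
            DirContext ldap = new InitialDirContext(env);

            SearchControls sc = new SearchControls();
            sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
            sc.setReturningAttributes(new String[] { "uid", "cn" });

            try (Connection db = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//db.example.com:1521/ORCL", "app", "pw"); // placeholder
                 PreparedStatement ins = db.prepareStatement(
                     "INSERT INTO ldap_staging (uid, common_name) VALUES (?, ?)")) {
                db.setAutoCommit(false);
                NamingEnumeration<SearchResult> results =
                    ldap.search("ou=people,dc=example,dc=com", "(objectClass=person)", sc);
                while (results.hasMore()) {
                    SearchResult r = results.next();
                    // Assumes both attributes are present on every entry
                    ins.setString(1, (String) r.getAttributes().get("uid").get());
                    ins.setString(2, (String) r.getAttributes().get("cn").get());
                    ins.addBatch();            // batch the inserts rather than row-by-row
                }
                ins.executeBatch();
                db.commit();                   // ldap_staging can then feed a view or MV refresh
            }
            ldap.close();
        }
    }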

  • ES2 LDAP Query, LDAPService, Foundation.

    I have a problem with the LDAP Query activity in LiveCycle ES2. I have written a process in LiveCycle ES2 SP1 Workbench which runs as expected, but when I export it in an LCA to my client's LiveCycle server it behaves differently. The process appears to run the same, but the LDAP queries do not return any data. When I run this process there are no errors in the server log or in the process recording, just a failure to return any data.
    I have triple-checked the LDAP configuration in the Adminui, the Workbench LDAP component, and the LDAP Query activity and found all to be configured correctly. The strange thing is I can fix it by dragging on a new activity and setting it to an LDAP Query. I then configure it exactly the same as the original component and it functions as expected, successfully returning data.
    Has anyone else seen this problem before and can anyone suggest a solution? I have a much larger application with many processes containing many LDAP Query components, so replacing each of them after each import of the LCA is not possible. I have tried creating different LCAs and importing them onto different LiveCycle servers, but always with the same result.

    Is the problem seen only with this group, or do other groups also fail to show group members?

  • Export query results into .csv file?

    Hello, I have a T-SQL script that gets row counts for a specified date range and then needs to loop (by incrementing +1 day to get the next day's counts) over a large date range. I'm aiming to output and append each day's query result counts
    into a .csv file via a SQL Agent job, since this will take quite a while to complete.
    Would using the following as an example template...
    INSERT INTO OPENROWSET('Microsoft.ACE.OLEDB.12.0','Text;Database=D:\;HDR=YES;FMT=Delimited','SELECT * FROM [FileName.csv]')
    SELECT Field1, Field2, Field3
    FROM DatabaseName
    ...be the method or something else?
    If this is a good approach: I've tried running it but get the following error:
    Msg 7357, Level 16, State 2, Line 76
    Cannot process the object "SELECT * FROM [FileName.csv]". The OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)" indicates that either the object has no columns or the current user does not have permissions
    on that object.
    Thanks in advance.

    Hi Techresearch7777777,
    The error in your post says that the file FileName.csv has to be created with the column names in the first row. Like:
    Field1,Field2,Field3
    Alternatively, you can create a schema.ini file in the same folder:
     [FileName.csv]
     Format=CSVDelimited
     ColNameHeader=False
     Col1=Field1 [DataType]
     Col2=Field2 [DataType]
     Col3=Field3 [DataType]
    For the [DataType], you can reference
    Schema.ini File (Text File Driver)
    If you have any question, feel free to let me know.
    Eric Zhang
    TechNet Community Support

  • Exporting BEx Query results to CSV or other files.

    Hi All,
    We have a report designed in WAD which uses a BEx query to display a department's information. Because of the amount of data in the results, it crashes when you bring all the free characteristics into the rows (the error is: Result set too large (552244 cells); data retrieval restricted by configuration (maximum = 500000 cells)). It is a requirement from the user to have all the free characteristics in the rows (which I can understand is not the right way to go). As it crashes when you add the last characteristic (WBS Purpose) into the rows, they aren't able to export data into a spreadsheet from the results (as there are no results).
    Now my question is: is there any way we can export the results from the BEx query into any format, for example text or CSV? If yes, can we schedule that process? Please note that these characteristics are displayed as hierarchies.
    I am new to SAP BW and it has been only six months since I have been in this role. Any help would be much appreciated.
    I looked online and some where in the forum it was mentioned of using program RSCRM_BAPI, but looks like it needs some sort of downloads from SAP.
    Is there any other way to export the BEx query results?
    Cheers
    Sandeep

    Hello Sandeep,
    You can use the "Context Menu" item in your WAD; in its properties you can see the options Export to Excel and Export to CSV by default, along with many other options you may want.
    Hope it works for you!
    With Regards,
    PJ.

  • Processing large volumes of data in PL/SQL

    I'm working on a project which requires us to process large volumes of data on a weekly/monthly/quarterly basis, and I'm not sure we are doing it right, so any tips would be greatly appreciated.
    Requirement
    Source data is in a flat file in "short-fat" format i.e. each data record (a "case") has a key and up to 2000 variable values.
    A typical weekly file would have maybe 10,000 such cases i.e. around 20 million variable values.
    But we don't know which variables are used each week until we get the file, or where they are in the file records (this is determined via a set of meta-data definitions that the user selects at runtime). This makes identifying and validating each variable value a little more interesting.
    Target is a "long-thin" table i.e. one record for each variable value (with numeric IDs as FKs to identify the parent variable and case.
    We only want to load variable values for cases which are entirely valid. This may be a merge i.e. variable values may already exist in the target table.
    There are various rules for validating the data against pre-existing data etc. These rules are specific to each variable, and have to be applied before we put the data in the target table. The users want to see the validation results - and may choose to bail out - before the data is written to the target table.
    Restrictions
    We have very limited permission to perform DDL e.g. to create new tables/indexes etc.
    We have no permission to use e.g. Oracle external tables, Oracle directories etc.
    We are working with standard Oracle tools i.e. PL/SQL and no DWH tools.
    DBAs are extremely resistant to giving us more disk space.
    We are on Oracle 9iR2, with no immediate prospect of moving to 10g.
    Current approach
    Source data is uploaded via SQL*Loader into static "short fat" tables.
    Some initial key validation is performed on these records.
    Dynamic SQL (plus BULK COLLECT etc) is used to pivot the short-fat data into an intermediate long-thin table, performing the validation on the fly via a combination of including reference values in the dynamic SQL and calling PL/SQL functions inside the dynamic SQL. This means we can pivot+validate the data in one step, and don't have to update the data with its validation status after we've pivoted it.
    This upload+pivot+validate step takes about 1 hour 15 minutes for around 15 million variable values.
    The subsequent "load to target table" step also has to apply substitution rules for certain "special values" or NULLs.
    We do this by BULK collecting the variable values from the intermediate long-thin table, for each valid case in turn, applying the substitution rules within the SQL, and inserting into/updating the target table as appropriate.
    Initially we did this via a SQL MERGE, but this was actually slower than doing an explicit check for existence and switching between INSERT and UPDATE accordingly (yes, that sounds fishy to me too).
    This "load" process takes around 90 minutes for the same 15 million variable values.
    Questions
    Why is it so slow? Our DBAs assure us we have lots of table-space etc, and that the server is plenty powerful enough.
    Any suggestions as to a better approach, given the restrictions we are working under?
    We've looked at Tom Kyte's stuff about creating temporary tables via CTAS, but we have had serious problems with dynamic SQL on this project, so we are very reluctant to introduce more of it unless it's absolutely necessary. In any case, we have serious problems getting permissions to create DB objects - tables, indexes etc - dynamically.
    So any advice would be gratefully received!
    Thanks,
    Chris

    We have 8 "short-fat" tables to hold the source data uploaded from the source file via SQL*Loader (the SQL*Loader step is fast). The data consists simply of strings of characters, which we treat simply as VARCHAR2 for the most part.
    These tables consist essentially of a case key (composite key initially) plus up to 250 data columns. 8*250 = 2000, so we can handle up to 2000 of these variable values. The source data may have any number of variable values in each record, but each record in a given file has the same structure. Each file-load event may have a different set of variables in different locations, so we have to map the short-fat columns COL001 etc. to the corresponding variable definition (for validation etc.) at runtime.
    CASE_ID   VARCHAR2(13)
    COL001    VARCHAR2(10)
    ...
    COL250    VARCHAR2(10)
    We do a bit of initial validation in the short-fat tables, setting a surrogate key for each case etc (this is fast), then we pivot+validate this short-fat data column-by-column into a "long-thin" intermediate table, as this is the target format and we need to store the validation results anyway.
    The intermediate table looks similar to this:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10) -- from COL001 etc
    STATUS VARCHAR2(10) -- set during the pivot+validate process above
    The target table looks very similar, but holds cumulative data for many weeks etc:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10)
    We only ever load valid data into the target table.
    Chris

  • Saving query results to a flat file

    Hello Experts!
    We have a very specific issue on our current project and I would like to know if any of you have ever done something similar. We are taking query results from BW (after complex calculations, some based on SY-DATE) and saving them to flat files to transfer to a SQL database structure on the Enterprise Portal. From here, the portal team renders the information into more "static" dashboards that include mouse-over features and even limited drilldown (i.e. no matter where a user clicks, the report always drills down on profit center).
    There are many reasons why the model is set up as such (mostly training of executive level end users), and even though it doesn't mesh with the idea that BW could do this work on its own, we have to work with what we have.
    We have come up with 3 possible ways to save this data to flat files and hopefully someone can tell us which might be the most effective.
    1.) Information Broadcasting. If we broadcast XML files to the portal, will the portal team be able to read that data into a SQL database? Is there another way to use broadcasting to create and send a flat file to a specific location?
    2.) RSCRM_BAPI. This transaction seems to not support texts.
    3.) Open Hub. In order to get the open hub to work, we first have to save all of our query results to direct write data store objects using APD. (calculations based on current date, for example, would require daily full loads to the underlying data providers otherwise.)
    What is the best way to accomplish this? Is information broadcasting capable of doing this?
    Thanks!

    Hi Adam,
    Do you have to use flat files to load the information to a SQL database? (if so maybe someone else has a suggestion on which way would be the best).
    I try to stay away from flat file uploads as there is a lot of manual work involved. May I suggest setting up a connection to your SQL table in the DBCON table and then writing a small ABAP program to push data into the SQL database (which you can automate).
    1) Use APD to push data into a table within BW.
    2) Go to transaction SM30 > table DBCON and setup a new entry specifying the conncection parameters to your SQL database.
    3) In SE38, write an ABAP program along the following lines (assuming the connection you set up in DBCON is named conn1):
    data: con_name like dbcon-con_name.
    con_name = 'conn1'.
    exec sql.
      set connection :con_name
    endexec.
    * Select the data that the APD saved into your BW table into an internal table,
    * e.g. ITAB with header line, then loop over it and insert each row into the SQL table:
    loop at itab.
      exec sql.
        insert into <SQL TABLE> (column1, column2)
        values (:itab-column1, :itab-column2)
      endexec.
    endloop.
    If you decide to take this approach you may find more info on DBCON and the process on SDN. Good luck!

  • Need suggestion for Column update in query results

    While generating reports using Oracle 10g SQL query, we need to update a few of the columns' data with business calculations. We are processing a large amount of data. Kindly suggest the best method to achieve this.

    I don't know about Oracle 10g SQL query, but I wouldn't mix reporting with data calculations that are stored persistently in the database. I would separate them; e.g. you could create a database job to execute your updates at a specific time each day.
    Hope this helps

  • Export query results to flat file with dynamic filename

    Hi
    Can anybody point me to how to dynamically export a query result set to, for example, a txt file using process flows in OWB?
    Let's say I have a simple select query
    select * from table1 where daterange >= sysdate -1 and daterange < sysdate
    so the query results will be different every day because the date range will be different. I would also like to name the txt file dynamically,
    e.g. results_20090601.txt, results_20090602.txt, results_20090603.txt
    I can't see any activity in the process editor to enter a custom SQL statement, like there is in MSSQL 2000 or 2005.
    Thanks in advance

    You can call existing procedures from a process flow; the procedure can create the file with whatever name you desire. OWB maps with a file as target can also create a file with a dynamic name defined by an expression (see here).
    Cheers
    David
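
    David's suggestions stay inside OWB. Purely to show the dynamic-filename pattern itself, below is a small JDBC sketch that runs the sysdate-range query and writes a results_YYYYMMDD.txt file; the connection details, output directory, and column handling are made up.

    import java.io.PrintWriter;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;
    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;

    public class DailyExport {
        public static void main(String[] args) throws Exception {
            // Yesterday's date stamp gives names like results_20090601.txt
            String stamp = LocalDate.now().minusDays(1)
                    .format(DateTimeFormatter.ofPattern("yyyyMMdd"));
            Path out = Path.of("/data/exports/results_" + stamp + ".txt"); // placeholder directory

            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//db.example.com:1521/ORCL", "owb_user", "pw"); // placeholder
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "select * from table1 where daterange >= sysdate - 1 and daterange < sysdate");
                 PrintWriter w = new PrintWriter(Files.newBufferedWriter(out))) {
                ResultSetMetaData md = rs.getMetaData();
                while (rs.next()) {
                    StringBuilder line = new StringBuilder();
                    for (int i = 1; i <= md.getColumnCount(); i++) {
                        if (i > 1) line.append('\t');
                        line.append(rs.getString(i));   // tab-separated; nulls print as "null"
                    }
                    w.println(line);
                }
            }
        }
    }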

  • Query result caching on oracle 9 and 10 vs indexing

    I am trying to improve performance on oracle 9i and 10g.
    We use some queries that take up to 30 minutes to execute.
    I heard that there are some products to cache query results.
    Would this have any advantage over using indexes or materialized views?
    Does anyone know any products that I can use to cache the results of these queries on disk?
    Personally I think that by using query result caching I would reduce the CPU time needed to process the query.
    Is this true?

    Your post pushes all the wrong buttons, starting with the fact that 9i and 10g are marketing labels, not version numbers.
    You don't tune queries by spending money and throwing resources at them. You tune them by identifying the problem queries, running explain plans, visualizing their output using DBMS_XPLAN, and addressing the root cause.
    If you want help, post full version numbers, the SQL statements, and the DBMS_XPLAN outputs.

  • Processing Large Files using Chunk Mode with ICO

    Hi All,
    I am trying to process large files using ICO. I am on PI 7.3 and I am using a new feature of PI 7.3 to split the input file into chunks.
    And I know that we cannot use mapping while using chunk mode.
    While trying I noticed the points below:
    1) I had created a Data Type, Message Type, and Interfaces in ESR and used them in my scenario (no mapping was defined); sender and receiver DT were the same.
    Result: the scenario did not work. It created only one chunk file (.tmp file) and terminated.
    2) I used a Dummy Interface in my scenario and it worked fine.
    So, please confirm whether we should always use dummy interfaces in the scenario while using chunk mode in PI 7.3, or is there something that I am missing?
    Thanks in Advance,
    - Pooja.

    Hello,
    According to this blog:
    File/FTP Adapter - Large File Transfer (Chunk Mode)
    The following limitations apply to chunk mode in the File Adapter. As per the screenshots in that blog, the split never considers the payload; it is just a binary split. So the following limitations apply:
    Only for File Sender to File Receiver
    No Mapping
    No Content Based Routing
    No Content Conversion
    No Custom Modules
    Probably you are doing content conversion; that is why it is not working.
    Hope this helps,
    Mark
    Edited by: Mark Dihiansan on Mar 5, 2012 12:58 PM
