Best approach to serve same data among many users from POJO DC

Hi,
This is a question about the best possible approach for the following scenario.
A POJO DC retrieves a collection of approximately 200 records, and the application shows the same records to all users. The collection needs to be updated from time to time, whenever new objects appear in the DB, and the changes need to be visible to all users.
Currently, for each user that connects to the application, the POJO DC executes the query and delivers the results. This works fine with a small number of users, but when several users access the app it becomes very expensive.
I am thinking of building an EO and VO programmatically and then exposing them in a shared application module, where, thanks to the BC cache, I would save and share the data among users. I would need to poll and refresh the collection when new results come through.
Is this a good approach? What are your thoughts, or how would this be better implemented?
Many thanks.
JDeveloper 12c

Hi,
why do you use a POJO DC at all when the solution is to use a shared ADF BC Application Module? Sample 156 would be your friend:
https://blogs.oracle.com/smuenchadf/resource/examples#156
Frank
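
For what it's worth, a minimal sketch of the polling refresh against such a shared AM. The AM definition, configuration, and VO names (model.SharedLookupAM, SharedLookupAMShared, CachedRecordsVO) are hypothetical; the shared scope itself is configured declaratively, as in the sample:

import oracle.jbo.ApplicationModule;
import oracle.jbo.ViewObject;
import oracle.jbo.client.Configuration;

public class SharedCacheRefresher {
    public static void main(String[] args) {
        // Hypothetical AM definition and configuration names -- adjust to your project.
        ApplicationModule am = Configuration.createRootApplicationModule(
                "model.SharedLookupAM", "SharedLookupAMShared");
        try {
            ViewObject vo = am.findViewObject("CachedRecordsVO");
            vo.executeQuery();  // (re)fills the BC cache that all sessions share
            System.out.println("Cached rows: " + vo.getEstimatedRowCount());
        } finally {
            Configuration.releaseRootApplicationModule(am, true);
        }
    }
}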

Similar Messages

  • Best Approach to Compare Current Data to New Data in a Record.

    Which is the best approach for comparing current data to new data for every single record before making updates?
    What I have is a table with 5,000 records that needs to be updated every week with data from a data feed, but I don't want to update every single record week after week. I want to update only the records whose data has changed.
    I can think of these options:
    1) two cursors (one for current data and one for new)
    2) one cursor for new data and one varray for current data
    3) one cursor for new data and a select into for current data.
    Thanks for your help.

    I don't recommend it over MERGE, but in theory you could use a checksum (OWA_OPT_LOCK.checksum()) on the rows and, if they differ, copy the new row into the destination table. Or you might be able to use rowscn to see if things have changed.
    Like I said, I don't know that I'd take either approach over MERGE, but they are options.
    Gaff
    Edited by: Gaff on Feb 11, 2009 3:25 PM
    Actually, rowscn between 2 tables may not be an option. I know you can turn a rowscn into a time value, so if that rowscn is derived from anything but the column values for the row, that would foil the approach.
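
    For reference, a sketch of the MERGE route Gaff mentions, driven from JDBC. The table and column names (target_tbl, staging_tbl, id, col1, col2) and connection details are made up for illustration, and the WHERE clause on the matched branch needs Oracle 10g or later:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class WeeklyMerge {
        public static void main(String[] args) throws Exception {
            // Hypothetical names: target_tbl is the 5,000-row table,
            // staging_tbl holds the weekly feed.
            String merge =
                "MERGE INTO target_tbl t " +
                "USING staging_tbl s ON (t.id = s.id) " +
                "WHEN MATCHED THEN UPDATE SET t.col1 = s.col1, t.col2 = s.col2 " +
                // Skip rows whose values are already identical (note: plain <>
                // does not treat NULLs as equal; add NULL handling if needed).
                "WHERE t.col1 <> s.col1 OR t.col2 <> s.col2 " +
                "WHEN NOT MATCHED THEN INSERT (id, col1, col2) " +
                "VALUES (s.id, s.col1, s.col2)";
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                 PreparedStatement ps = con.prepareStatement(merge)) {
                System.out.println(ps.executeUpdate() + " rows merged");
            }
        }
    }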

  • Best approach to replicate the data.

    Hello Every one,
    I want to know about the best approach to replicate data.
    First, to clarify the scenario: I have one Oracle 10g Enterprise Edition database and 20 Oracle 10g Standard Edition databases. The Enterprise Edition will run at the center and the other 20 Standard Edition databases will run at the sites.
    The data will move from center to sites and vice versa, not between sites. There is only one schema to replicate, with more than 500 tables.
    What will be the best option for replication (updatable MVs, Oracle Streams, or anything else)? It's urgent, please.
    Thanks in advance.
    Edited by: user560669 on Dec 13, 2009 11:01 AM

    Hello,
    Yes, MVs and Oracle Streams are the common ways to replicate data between databases.
    I think that in your case (you have to replicate a whole schema) Oracle Streams is interesting (it's not so easy
    to maintain 500 MVs).
    But you must be careful about the edition type.
    I'm not sure that Standard Edition allows the advanced replication features. It seems to me (but I may be wrong)
    that updatable MVs are an EE feature.
    Streams, on the other hand, seem to be available even in SE.
    Please, find enclosed some links about it:
    [http://www.oracle.com/database/product_editions.html]
    [http://www.oracle.com/technology/products/dataint/index.html]
    Hope it can help,
    Best regards,
    Jean-Valentin
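
    For illustration, this is roughly what creating an updatable MV at a site could look like, issued over JDBC here. The orders table, the center_db database link, and the connection details are hypothetical, and FAST refresh additionally needs a materialized view log on the master table:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreateSiteMV {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//sitehost:1521/SITE1", "repadmin", "secret");
                 Statement st = con.createStatement()) {
                st.execute("CREATE MATERIALIZED VIEW orders_mv " +
                           "REFRESH FAST WITH PRIMARY KEY " +
                           "FOR UPDATE " +  // updatable MV (advanced replication feature)
                           "AS SELECT * FROM orders@center_db");
            }
        }
    }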

  • How many users from an application can connect to a SQL server at a time?

    HI,
        I need to know how many users from an application can connect to a SQL Server instance at a time. Are there any settings in SQL Server for limiting the users?

    This is a difficult question, since it is both technical and legal.
    The absolute maximum number of connections is around 32,700, but unless your server is very beefy, it will choke long before that. A connection is not a user - an application can have many connections for the same user. Depending on your license
    model, you may be legally limited to a certain number of users, but this number is not enforced technically.
    Erland Sommarskog, SQL Server MVP, [email protected]
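
    As a quick check, both limits can be read from the server itself. A hedged JDBC sketch (host, credentials, and the Microsoft JDBC driver URL are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ConnectionLimits {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://dbhost:1433;databaseName=master;user=sa;password=secret");
                 Statement st = con.createStatement()) {
                // @@MAX_CONNECTIONS reports the hard ceiling Erland mentions.
                try (ResultSet rs = st.executeQuery("SELECT @@MAX_CONNECTIONS")) {
                    if (rs.next()) System.out.println("Max connections: " + rs.getInt(1));
                }
                // The configurable cap is the 'user connections' server option
                // (0 = unlimited up to the ceiling): EXEC sp_configure 'user connections';
            }
        }
    }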

  • What's the best approach to resetting Calendar data on Server?

    I have a database format error in a calendar that I only noticed after the migration to Server on Yosemite.  I'll paste a snippet from the Error Log in at the bottom that shows the error - I've highlighted the description of the problem in red.
    I found a pretty cool writeup from Linc in a different thread, but it's aimed at fixing a similar problem for a local user on their own machine rather than an iCal server like what we're running. Here's the link to that thread: Re: Calendar crashes on open  For example, does something like Calendar Cleaner work on our server database as well?
    In my case I think I'd basically like to gracefully remove all the Calendar databases from Server and start fresh (all the users' calendars are backed up on their local machines, so they can just import them into fresh/empty calendars once I've cleaned out the old stuff).  Any thoughts on "best approach" would be much appreciated.
    Here's the error log...
    File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twi sted/internet/defer.py", line 1099, in _inlineCallbacks
    2015-01-31 07:14:41-0600 [-] [caldav-0]         result = g.send(result)
    2015-01-31 07:14:41-0600 [-] [caldav-0]       File "/Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/txdav/caldav/datastore/sql.py", line 3635, in component
    2015-01-31 07:14:41-0600 [-] [caldav-0]         e, self._resourceID
    2015-01-31 07:14:41-0600 [-] [caldav-0]     txdav.common.icommondatastore.InternalDataStoreError: Data corruption detected (Invalid property: GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     VERSION:2.0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CALSCALE:GREGORIAN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     PRODID:-//Apple Inc.//Mac OS X 10.8.2//EN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTART:20121114T215900Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTEND:20121114T232700Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CLASS:PUBLIC
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CREATED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DESCRIPTION:Flight leg 2 of 2 for trip from MSP to LAX\\nhttp://www.google.
    2015-01-31 07:14:41-0600 [-] [caldav-0]      com/search?q=US+29+flight+status\\nBooked on November 8\\, 2012\\n
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTAMP:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LAST-MODIFIED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LOCATION:Sky Harbor International Airport\\, Phoenix\\, AZ
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SEQUENCE:0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     STATUS:CONFIRMED
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SUMMARY:US 29 from PHX to LAX
    2015-01-31 07:14:41-0600 [-] [caldav-0]     URL:http://www.hipmunk.com/flights/MSP-to-LAX#!dates=Nov14,Nov17&group=1&s
    2015-01-31 07:14:41-0600 [-] [caldav-0]      elected_flights=96f6fbfd91,be8b5c748d;kind=flight&locations=MSP,LAX&dates=
    2015-01-31 07:14:41-0600 [-] [caldav-0]      Nov14,Nov16&group=1&selected_flights=96f6fbfd91,
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-CALENDARSERVER-PERUSER-UID:D0737009-CBEE-4251-A288-E6FCE5E00752
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRANSP:OPAQUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACKNOWLEDGED:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACTION:AUDIO
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ATTACH:Basso
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRIGGER:-PT2H
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-APPLE-DEFAULT-ALARM:TRUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-WR-ALARMUID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ) in id: 3405
    2015-01-31 07:14:41-0600 [-] [caldav-0]    
    2015-01-31 07:16:39-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None
    2015-01-31 08:08:40-0600 [-] [caldav-1]  [AMP,client] [calendarserver.tools.purge#warn] Cleaning up future events for principal A95C9DB2-9757-46B2-ADF6-4DECE2728820 since they are no longer in directory
    2015-01-31 08:09:10-0600 [-] [caldav-1]  [-] [twext.enterprise.jobqueue#error] JobItem: 39, WorkItem: 762001 failed: ERROR:  canceling statement due to statement timeout
    2015-01-31 08:09:10-0600 [-] [caldav-1]    
    2015-01-31 08:13:40-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None

    <facepalm>  Well, there you go.  It turns out I was over-thinking this.  The Calendar app on a Mac can manage this database just fine.  Sorry about that.  There may be an easier way to do this, but here's how I did it.
    Use the Calendar.app on a local computer to:
    - Export the corrupted calendar to an ICS file on the local computer (Calendar -> File -> Export -> Export)
    - Create a new local calendar (Calendar -> File -> New Calendar -> On My Mac)
    - Import the corrupted calendar into the new/empty local calendar (Calendar -> File -> Import...)
    - Delete years and years of old events, including the one that was triggering that error message
    - Export the (now much smaller) local calendar to another ICS file on my computer (Calendar -> File -> Export -> Export)
    - Create a new calendar on the server (Calendar -> File -> New Calendar -> in the offending server-based iCal account)
    - Import the edited/fixed/smaller/no-longer-corrupted calendar into the new/empty server calendar (Calendar -> File -> Import...)
    - Make the newly-created iCal calendar the primary calendar (drag it to the top of the list of calendars on the server)
    - Delete the old/corrupted calendar (right-clicking on the bad calendar in the calendar list - you can only delete it once it's NOT the primary calendar any more)

  • Best way to show multiple copies of same data to different users

    Hi,
    I am new to configuring and using Oracle. I have an Oracle DB installed on a server machine, and I loaded it with a set of data. I have multiple users who would all like to access the tables, but they need to see unique copies of the data.
    What is my best approach? Should I create a new database instance? If so, what is the best way to copy data to the new instance?
    Any help is greatly appreciated.

    Each developer should copy the tables he requires for his work from the 'base' schema to his own. Give each developer SELECT privileges on the 'base' schema tables so he cannot change them. This way, the developer can refresh his data as needed on his own. Public synonyms for the base schema tables will let developers copy only the tables they wish to modify.
    Creating a separate fully loaded schema for each developer is going to create a never-ending load of work for you, as developers will be constantly asking you to refresh their schemas. Alternatively, you could create an export of production on a schedule and let the developers do the import to their schemas when they wish.
    Basically, these folks are developers and, IMNSHO, should be managing their own development schemas.
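
    A sketch of the developer-side refresh described above; the schema, table, and connection details are illustrative, and it assumes the DBA has already granted SELECT on the base table:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class RefreshDevTable {
        // Drop the private copy and recreate it from the 'base' schema.
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "dev_scott", "secret");
                 Statement st = con.createStatement()) {
                try {
                    st.execute("DROP TABLE employees");
                } catch (SQLException e) {
                    // ignore "table does not exist" on the first run
                }
                st.execute("CREATE TABLE employees AS SELECT * FROM base.employees");
            }
        }
    }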

  • Best approach to change historical data???

    Hi experts,
    As a requirement of our project, R/3 will periodically change a material (key) value on an existing product, that is, update the product code with a new one that replaces the old one.
    Ex: Material A (code M10013) becomes Material B (code M10014).
    We want to realign all historical data stored in BW from the old material code (M10013) to the new one (M10014).
    Question is whether there is a standard way to do this within BW.
    If not, what would be the best approach to deal with this?
    Thanks in advance for your help,
    Best regards,
    Enric
    Note: I have faced this same requirement in APO, where there's a Realignment functionality to convert CVCs, but I'm not sure if we have a similar tool in BW.

    Hi,
    Use the Data Mart concept.
    Load all the historical data of the old material code, under the new material code, into a new ODS,
    then pull the data from the new ODS back into the existing one.
    Hope this helps,
    regards
    suresh

  • What is the best approach to return Large data from Stored Procedure ?

    No answers to my original post; maybe better luck this time, thanks!
    We have a stored proc (Oracle 8i) that:
    1) receives some parameters,
    2) performs computations which create a large block of data, and
    3) returns this data to the caller.
    It must be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
    I have written this procedure as having an OUT param which is a REF CURSOR to a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) in a temp table, and then open the cursor as a SELECT from the temp table.
    I have tried to open the cursor as a SELECT of the working buffer (from dual) but I get the error "ORA-01460: unimplemented or unreasonable conversion requested".
    I suspect this is taking too much time; any tips about the best approach here? Is there a resource with REAL examples of returning large data?
    If I switch to CLOB, will it speed up the process, be compatible with callers, etc.? All the references to CLOB I saw use trivial examples.
    Thanks for any help,
    Yoram Ayalon

    Create a new farm in the secondary Data Center at the same patch level with the desired configuration. Replicate the databases using the method of choice (Mirroring, AlwaysOn, etc.). Create a downtime window where you can then attach the databases to the
    new farm's Web Application(s)/Service Application(s).
    Trevor Seward
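
    On the CLOB question itself: from a caller's perspective, a CLOB OUT parameter is straightforward to consume. A hedged JDBC sketch, assuming a hypothetical procedure GET_REPORT(p_id IN NUMBER, p_doc OUT CLOB) and placeholder connection details:

    import java.sql.CallableStatement;
    import java.sql.Clob;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Types;

    public class FetchLargeResult {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                 CallableStatement cs = con.prepareCall("{call get_report(?, ?)}")) {
                cs.setInt(1, 42);
                cs.registerOutParameter(2, Types.CLOB);
                cs.execute();
                Clob doc = cs.getClob(2);
                // Stream the CLOB rather than materializing it as one String.
                System.out.println("Report length: " + doc.length());
            }
        }
    }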

  • Best approach to return Large data ( 4K) from Stored Proc

    We have a stored proc (Oracle 8i) that:
    1) receives some parameters,
    2) performs computations which create a large block of data, and
    3) returns this data to the caller.
    It must be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
    I have written this procedure as having an OUT param which is a REF CURSOR to a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) in a temp table, and then open the cursor as a SELECT from the temp table.
    I have tried to open the cursor as a SELECT of the working buffer (from dual) but I get the error "ORA-01460: unimplemented or unreasonable conversion requested".
    I suspect this is taking too much time; any tips about the best approach here? Is there a resource with REAL examples of returning large data?
    If I switch to CLOB, will it speed up the process, be compatible with callers, etc.? All the references to CLOB I saw use trivial examples.
    Thanks for any help,
    Yoram Ayalon


  • Best Practice for Flat File Data Uploaded by Users

    Hi,
    I have the following scenario:
    1.     Users would like to upload data from flat file and subsequently view their reports.
    2.     SAP BW support team would not be involved in data upload process.
    3.     Users would not go to RSA1 and use InfoPackages & DTPs. Hence, another mechanism for data upload is required.
    4.     Users consist of two groups: external and internal users. External users would not have access to the SAP system. However, access via a portal is acceptable.
    What are the best practice we should adopt for this scenario?
    Thanks!

    Hi,
    I can share what we do in our project.
    We get the files from the web onto the application server, into a path dedicated to this process. The file placed on the server has a naming convention based on your project; you can name it as you like. Every day a file with the same name is placed on the server with different data. The path in the InfoPackage is fixed to that location on the server. After this, the process chain triggers and loads the data from that particular path fixed on the application server. After the load completes, a copy of the file is taken as a backup and deleted from that path.
    So this happens everyday.
    Rgds
    SVU123
    Edited by: svu123 on Mar 25, 2011 5:46 AM

  • What is the best way to get/change data in R/3 from external interface?

    Hi SAP gurus,
    I would like to know the best technology for accessing and maintaining all SAP functionality and data in R/3 systems.
    Does anyone know if connectors (.NET or JCo) are the only way to manipulate SAP system data, or is there another way?
    One thing more: which is the best connector, with the most functionality?
    E.g. the Screen Painter was made in C++ and is executed by a user event in the R/3 system, so I would like to know if there is any way to do the same but replace the Screen Painter with another custom application.
    Regards

    Hello Vitor
    Not all functions and data are externally accessible. Only those business objects (e.g. sales order, customer, material) for which BAPIs are available (transaction BAPI) can be accessed via RFC.
    Regards,
        Uwe
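
    To make this concrete, calling a BAPI over RFC with JCo 3 looks roughly like the sketch below. The destination name R3_DEST is a placeholder for a configured destination, and the table parameter shown is only illustrative:

    import com.sap.conn.jco.JCoDestination;
    import com.sap.conn.jco.JCoDestinationManager;
    import com.sap.conn.jco.JCoException;
    import com.sap.conn.jco.JCoFunction;

    public class BapiCall {
        public static void main(String[] args) throws JCoException {
            // "R3_DEST" must point at a configured destination (e.g. an
            // R3_DEST.jcoDestination file with host/client/user properties).
            JCoDestination dest = JCoDestinationManager.getDestination("R3_DEST");
            JCoFunction fn = dest.getRepository().getFunction("BAPI_CUSTOMER_GETLIST");
            if (fn == null)
                throw new RuntimeException("BAPI not available in this system");
            fn.execute(dest);
            // Results arrive in the function's table parameters, e.g.:
            System.out.println(fn.getTableParameterList()
                                 .getTable("ADDRESSDATA").getNumRows());
        }
    }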

  • Best Practice for serving static files (gif, css, js) from front web server

    I am working on optimizing portal performance by moving static files (gif, css, js) to my front web server (Apache) for a WLP 10 portal application. I ended up moving the whole "framework" folder of the portal WebContent to a file system served by the Apache web server (the one which hosts the WLS plugin pointing to my WLP cluster). I use the following directives for that:
    Alias /portalapp/framework "/somewhere/servedbyapache/docs/framework"
    <Directory "/somewhere/servedbyapache/docs/framework">
    <FilesMatch "\.(jsp|jspx|layout|shell|theme|xml)$">
    Order allow,deny
    Deny from all
    </FilesMatch>
    </Directory>
    <LocationMatch "/partalapp(?!/framework)">
         SetHandler weblogic-handler
         WLCookieName MYPORTAL
    </LocationMatch>
    So now the browser gets all static files from Apache instead of the app server. However, there are several files from the bighorn L&F which are located in the WLP shared lib: skins/bighorn/window.css, wsrp.css, menu.css, general.css, colors.css; skins/bighorn/borderless/window.css; skeletons/bighorn/js/util.js, buttons.js; skeletons/bighorn/css/layout.css.
    I have to merge these files into the project and physically move them into the Apache-served file system to make the Apache configuration above work.
    However, this approach exposes a bunch of framework resources which I do not intend to change and which should not be changed (only custom.css is the place to make custom changes to the bighorn skin). Which is obviously not a very elegant solution. The other approach would be to create a more elaborate expression for LocationMatch (I am not sure that's entirely possible given the location of these shared resources). A more radical move - stop using bighorn and create a totally custom L&F (skin, skeleton) - is quite a lot of work (plus, bighorn is working just fine for us).
    I am wondering what is the "Best Practice" approach recommended by Oracle/BEA, given that I want to serve all static files from my front-end Apache server instead of the WLS app server.
    Thanks,
    Oleg.

    Oleg,
    you might want to have a look at the official WLP performance support pattern (Metalink DocID 761001.1), which contains a section about "Configuring a Fronting Web Server Serving WebLogic Portal 8.1 Static Artifacts".
    It was written for WLP 8.1, but most of the settings / recommendations should also apply to WLP 10.
    --Stefan

  • How to approach in getting the dates for the user given periods

    Hi All,
    I have a requirement where the calendar follows a 4-6-6 pattern per quarter,
    e.g. period 1 (Jan) has 4 weeks,
    period 2 (Feb) has 6 weeks,
    period 3 (Mar) has 6 weeks,
    then again period 4 (Apr) has 4 weeks,
    period 5 (May) has 6 weeks,
    period 6 (June) has 6 weeks.
    How do I get the dates (from date and end date) for the periods?
    Anybody's help will be appreciated
    Regards
    Saugata

    "I have a requirement where the calendar would be like 466 for a period" - What does that mean? Is 466 the format of the data?
    The end date depends on the start date. This query might be helpful for you:
    SQL> WITH data AS(
      2    SELECT 1 period, 4 duration FROM dual UNION ALL
      3    SELECT 2, 6 FROM dual UNION ALL
      4    SELECT 3, 6 FROM dual UNION ALL
      5    SELECT 4, 4 FROM dual UNION ALL
      6    SELECT 5, 6 FROM dual UNION ALL
      7    SELECT 6, 6 FROM dual)
      8  SELECT
      9    period,
    10    duration,
    11    SYSDATE + SUM(duration) OVER (ORDER BY period RANGE UNBOUNDED PRECEDING) * 7 AS end_date
    12  FROM data
    13  ;
        PERIOD   DURATION END_DATE
             1          4 16-NOV-07
             2          6 28-DEC-07
             3          6 08-FEB-08
             4          4 07-MAR-08
             5          6 18-APR-08
             6          6 30-MAY-08
    6 rows selected.
    SQL>

  • What is the best approach to fetch more than 10,000 rows from database ?

    Hi Experts,
    I am new to JDBC. I have a table containing more than 10,000 rows. I don't want to fetch all of them in one shot, because it takes too much time and consumes too much memory. I want to fetch the first 500 rows to return to the end user. When the end user wants more, I fetch the subsequent 500 rows, until all 10,000 rows are fetched. But how do I do this in JDBC? The most difficult thing for me is how to remember the fetch position so that I can fetch each subsequent 500 rows. Could someone help me with that? Sample code is better. Thank you very much.
    Regards,
    Skeeter

    When you instantiate your (prepared) statement, specify a scrollable result set:
    PreparedStatement stat = connection.prepareStatement(query,
                                    ResultSet.TYPE_SCROLL_INSENSITIVE,
                                    ResultSet.CONCUR_READ_ONLY);
    ResultSet rs = stat.executeQuery();
    The following method reads 'size' rows, starting at position 'pos':
    void read(ResultSet rs, int pos, int size) throws SQLException {
       if (rs.absolute(pos))          // position the cursor on row 'pos'
          do {
             process(rs);             // handle the current row
          } while (rs.next() && --size > 0);
    }
    kind regards,
    Jos
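
    Note that a scroll-insensitive result set may still pull a lot of data to the client, depending on the driver. An alternative sketch that pushes the paging into the database using the classic Oracle ROWNUM pattern (table and column names are made up for illustration):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PageFetcher {
        // Fetches rows (lo, hi] of a stable ordering; the ORDER BY must be
        // deterministic or pages can overlap between calls.
        static void readPage(Connection con, int lo, int hi) throws Exception {
            String sql =
                "SELECT * FROM (" +
                "  SELECT t.*, ROWNUM rn FROM (" +
                "    SELECT id, name FROM big_table ORDER BY id" +
                "  ) t WHERE ROWNUM <= ?" +
                ") WHERE rn > ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, hi);
                ps.setInt(2, lo);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next())
                        System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                }
            }
        }
    }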

  • Server admin not seeing directory users from workgroup manager

    I am setting up a new Xserve with Snow Leopard (get 'em while we can). We have eight other XServes running Leopard or Snow Leopard server. On those machines we have set up file sharing over AFP. The machines are connected to our Active Directory server and our users authenticate using their domain passwords. All of our other servers were setup in Leopard and were upgraded to Snow Leopard. We have not had any issues authenticating to those boxes.
    This is the first one that we have actually set up new-out-of-the-box in Snow Leopard. I can set Workgroup Manager up to connect to our AD, and can see and search my domain users and groups in Workgroup Manager. When I try to set up my file shares in Server Admin, none of my domain users show up - only local accounts.
    What have I missed? In Leopard, when I connected to the domain, the users immediately became available in Server Admin. Not so in SL, at least on this box.
    Help?

    Hi
    The first thing to check is whether you've bound the server to the AD domain. The second is whether /Active Directory/All Domains is in the search policy. If you don't do both of these, Workgroup Manager won't display anything coming from the AD schema.
    In 10.6 Apple moved the Directory Utility from where it used to be in /Applications/Utilities and made it part of the Accounts preference pane. Perhaps it's this change that's confusing you? I would not advise doing this, but it's also possible you used the Server Setup Assistant to do most of the configuration? If you did, maybe something went wrong at that stage (it won't be the first time) and you need to manually bind the server instead.
    As ever, make sure this server is using the same NTP server as the others.
    Tony
