Type of synchronization

How can I identify the type of synchronization, i.e. whether it is Smart Sync or Generic Sync?

Hi,
open MEREP_MON in the middleware and start a sync. Do you see anything there (except MIAUTH)?
If yes, then you have Smart Sync; if not, you either have no app installed or you have a Generic Sync app.
The difference between the two is described in the first answer.
Hope this helps to solve your issue!
Regards,
Oliver

Similar Messages

  • FTP for upload / synchronization

    I know this is a bit of an odd question, but here goes . . .
    I've been using a very old version of GoLive (5.00) for about a decade now, and for most of that time, I've used it only for uploading changes to website files.  As for making the changes in the first place, I've just been working with the raw HTML code.  But I've found GoLive to be very useful for the FTPing since it keeps track of the local files that have been updated.
    I'd be happy to continue on the way I have in the past, but after rebuilding my entire WinXP platform from scratch on a new HDD, I'm having trouble with the GoLive install.  I've gone around the bend a couple times trying to get it right, but then it occurred to me: I'm going through a lot of trouble trying to install a pretty good-sized piece of software that I really intend to use only for FTPing.
    So here's my question... can anyone recommend an FTP-type program that has the same type of synchronization capability that's built into GoLive's FTP feature?  If there's a good Freeware package out there, all the better.
    Thanks in advance for whatever help you can provide.
    —JM

    So... FWIW...  My initial problem was getting an error msg (you know, the report-this-error-to-Microsoft dialog box) when I exited GoLive.  This was the only problem, but I thought the reason I had it was that I installed the 5.00 UPGRADE CD fresh; i.e., *not* on top of 4.XX.  So, I uninstalled 5.00, installed 4.XX, and then reinstalled the 5.00 upgrade.  This led to more problems, with a half-dozen error messages when I tried opening GoLive.  That's when I uninstalled everything and posted my original message.
    After the flood of replies to that  » ;-) « …, I figured I might as well try re-installing 5.00 fresh.  I have done so and have the original problem (error report dialogue on exit), but it's an easy one to live with until FileZilla comes out with synchronization support.
    —JM

  • Using version indicator to synchronize databases

    Hi!
    I have two databases - one production and one development - and I want a
    smarter way to keep them in synch.
    I was contemplating creating my own timestamp field in every class and
    updating it on store() but maybe using the version indicator column is
    better?
    Are there any existing synchronization tools for Kodo? Has anyone else found
    a good solution to this problem?
    Thanks,
    Nic.

    Nic,
    Can you describe the rules for how synchronization should happen? For
    example, is one database the live copy and another a slave that should
    be kept updated of changes made to the live copy? Or is it possible for
    concurrent changes to happen in both databases? If the latter, what
    types of synchronization error conditions are you looking for?
    -Patrick
    Nic Cottrell wrote:
    Hi!
    I have two databases - one production and one development - and I want a
    smarter way to keep them in synch.
    I was contemplating creating my own timestamp field in every class and
    updating it on store() but maybe using the version indicator column is
    better?
    Are there any existing synchronization tools for Kodo? Has anyone else found
    a good solution to this problem?
    Thanks,
    Nic.
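    A minimal sketch of the timestamp approach described above, independent of Kodo and using plain JDBC (the table and column names ACCOUNT, ID and LAST_MODIFIED are illustrative assumptions, as is the copyRow helper):
    import java.sql.*;

    public class TimestampSync {
        // Copy rows whose LAST_MODIFIED timestamp is newer in the source
        // (production) database than in the target (development) database.
        public static void sync(Connection source, Connection target) throws SQLException {
            try (Statement st = source.createStatement();
                 ResultSet rs = st.executeQuery("SELECT ID, LAST_MODIFIED FROM ACCOUNT");
                 PreparedStatement check = target.prepareStatement(
                         "SELECT LAST_MODIFIED FROM ACCOUNT WHERE ID = ?")) {
                while (rs.next()) {
                    long id = rs.getLong("ID");
                    Timestamp srcStamp = rs.getTimestamp("LAST_MODIFIED");
                    check.setLong(1, id);
                    try (ResultSet tgt = check.executeQuery()) {
                        // The row is missing or stale in the target: copy it over.
                        if (!tgt.next() || tgt.getTimestamp(1).before(srcStamp)) {
                            copyRow(source, target, id);
                        }
                    }
                }
            }
        }

        private static void copyRow(Connection source, Connection target, long id) {
            // Application-specific: read the full row from source and upsert it into target.
        }
    }
    Note that this only catches inserts and updates; deletions in the source leave no timestamped row behind, which is one reason Patrick asks about the exact synchronization rules.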

  • ALUI + Windows server 2003 + WebLogic server + SSO via WIA

    Hello,
    I'm trying to configure SSO on ALUI in my environment.
    I've got two servers.
    On server 1 is installed ALUI 6.5 on WebLogic server.
    On server 2 is installed Active Directory.
    Both servers are attached to one Active Directory domain, and both run Windows Server 2003 as the operating system.
    My implementation of SSO in this environment is as follows:
    1. I install IIS on server 2.
    2. I deploy the WebLogic plug-in for IIS and configure it as a proxy. This is configured on server 2.
    3. I check that this setup works by trying to connect to the portal via IIS (it works).
    4. I configure Windows Integrated Authentication for the /portal virtual directory.
    5. I install Identity Service for AD (server 2)
    6. I configure the Authentication Source and synchronize my users from AD. After the users have been synchronized into the portal, I validated the setup by trying to log in to the portal as one of the AD users.
    7. I change portal Authentication Source type to Synchronization only and select SSO as Authentication partner in portal administration.
    8. I modify Authentication source prefix, enable SSO, and select SSO provider (for WIA SSOVendor=5) in the portalconfig.xml file
    <setting name="DefaultAuthSourcePrefix">
    <value xsi:type="xsd:string">mydomain</value>
    </setting>
    IE on a computer in mydomain is configured to "Automatic logon with current user name and password", and the site that points to the portal (server 1) via the proxy on server 2 is in the Local intranet zone, but this configuration is still not working.
    logs from weblogic plug-in (server 2):
         ================New Request: [portal/server.pt.wlforward] =================
    Wed Nov 19 09:27:24 2008 <484412270832442> SSL is not being used
    Wed Nov 19 09:27:24 2008 <484412270832442> resolveRequest: wlforward: /portal/server.pt
    Wed Nov 19 09:27:24 2008 <484412270832441> Wed Nov 19 09:27:24 2008 <484412270832442> timer thread starting
    URI is /portal/server.pt, len=17
    Wed Nov 19 09:27:24 2008 <484412270832442> Request URI = [portal/server.pt]
    Wed Nov 19 09:27:24 2008 <484412270832442> attempt #0 out of a max of 10
    Wed Nov 19 09:27:24 2008 <484412270832442> Trying a pooled connection for '192.168.130.103/80/80'
    Wed Nov 19 09:27:24 2008 <484412270832442> getPooledConn: No more connections in the pool for Host[192.168.130.103] Port[80] SecurePort[80]
    Wed Nov 19 09:27:24 2008 <484412270832442> general list: trying connect to '192.168.130.103'/80/80 at line 1239 for '/portal/server.pt'
    Wed Nov 19 09:27:24 2008 <484412270832442> INFO: New NON-SSL URL
    Wed Nov 19 09:27:24 2008 <484412270832442> Connect returns -1, and error no set to 10035, msg 'Unknown error'
    Wed Nov 19 09:27:24 2008 <484412270832442> EINPROGRESS in connect() - selecting
    Wed Nov 19 09:27:24 2008 <484412270832442> Local Port of the socket is 8610
    Wed Nov 19 09:27:24 2008 <484412270832442> Remote Host 192.168.130.103 Remote Port 80
    Wed Nov 19 09:27:24 2008 <484412270832442> general list: created a new connection to '192.168.130.103'/80 for '/portal/server.pt', Local port: 8610
    Wed Nov 19 09:27:24 2008 <484412270832442> WLS info in sendRequest: 192.168.130.103:80 recycled? 0
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs from client:[Accept]=[*/*]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs from client:[Accept-Encoding]=[gzip, deflate]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs from client:[Accept-Language]=[cs]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs from client:[Authorization]=[Negotiate TlRMTVNTUAADAAAAAAAAAEgAAAAAAAAASAAAAAAAAABIAAAAAAAAAEgAAAAAAAAASAAAAAAAAABIAAAABcKIogUCzg4AAAAP]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs from client:[Cookie]=[ptLTCN=ptLTCDV; ptrecentfolders=336-; JSESSIONID=MMhLJvThCGWyn9knTvgh9dRXMzH0N70JSm8JjS7tKbjQyNvhkX21!-1901726998; plloginoccured=false; ptLastLoginAuthSource=]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs from client:[Host]=[192.168.130.83]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs from client:[User-Agent]=[Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022)]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs from client:[UA-CPU]=[x86]
    Wed Nov 19 09:27:24 2008 <484412270832442> URL::sendHeaders(): meth='GET' file='/portal/server.pt' protocol='HTTP/1.1'
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[Accept]=[*/*]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[Accept-Encoding]=[gzip, deflate]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[Accept-Language]=[cs]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[Authorization]=[Negotiate TlRMTVNTUAADAAAAAAAAAEgAAAAAAAAASAAAAAAAAABIAAAAAAAAAEgAAAAAAAAASAAAAAAAAABIAAAABcKIogUCzg4AAAAP]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[Cookie]=[ptLTCN=ptLTCDV; ptrecentfolders=336-; JSESSIONID=MMhLJvThCGWyn9knTvgh9dRXMzH0N70JSm8JjS7tKbjQyNvhkX21!-1901726998; plloginoccured=false; ptLastLoginAuthSource=]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[Host]=[192.168.130.83]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[User-Agent]=[Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022)]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[UA-CPU]=[x86]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[Connection]=[Keep-Alive]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[WL-Proxy-Client-IP]=[192.168.130.83]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[Proxy-Client-IP]=[192.168.130.83]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[X-Forwarded-For]=[192.168.130.83]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[WL-Proxy-Client-Keysize]=[]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[WL-Proxy-Client-Secretkeysize]=[]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[X-WebLogic-KeepAliveSecs]=[30]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[X-WebLogic-Force-JVMID]=[unset]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[WL-Proxy-SSL]=[false]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[Proxy-Remote-User]=[VYVOJ\Administrator]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to WLS:[Proxy-Auth-Type]=[Negotiate]
    Wed Nov 19 09:27:24 2008 <484412270832442> URL::parseHeaders: CompleteStatusLine set to [HTTP/1.1 302 Moved Temporarily]
    Wed Nov 19 09:27:24 2008 <484412270832442> URL::parseHeaders: StatusLine set to [302 Moved Temporarily]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs from WLS:[Cache-Control]=[no-cache="set-cookie"]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs from WLS:[Date]=[Wed, 19 Nov 2008 08:27:24 GMT]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs from WLS:[Transfer-Encoding]=[chunked]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs from WLS:[Location]=[http://192.168.130.83/portal/SSOServlet?]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs from WLS:[X-WebLogic-JVMID]=[1729301638]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs from WLS:[Set-Cookie]=[JSESSIONID=BZGGJjNMHHySbShbTVQ018X2Tpg5Gc3cb6L28yBqnl9vKGynjLlp!1729301638; path=/]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs from WLS:[Set-Cookie]=[plloginoccured=false; path=/]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs from WLS:[X-Powered-By]=[Servlet/2.5 JSP/2.1]
    Wed Nov 19 09:27:24 2008 <484412270832442> parsed all headers OK
    Wed Nov 19 09:27:24 2008 <484412270832442> sendResponse() : uref->getStatus() = '302'
    Wed Nov 19 09:27:24 2008 <484412270832442> Going to send headers to the client. Status :302 Moved Temporarily
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to client:[Cache-Control]=[no-cache="set-cookie"]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to client:[Transfer-Encoding]=[chunked]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to client:[Location]=[http://192.168.130.83/portal/SSOServlet?]
    Wed Nov 19 09:27:24 2008 <484412270832442> for 192.168.130.103/80/80, updating JVMID: 1729301638
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to client:[Set-Cookie]=[JSESSIONID=BZGGJjNMHHySbShbTVQ018X2Tpg5Gc3cb6L28yBqnl9vKGynjLlp!1729301638; path=/]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to client:[Set-Cookie]=[plloginoccured=false; path=/]
    Wed Nov 19 09:27:24 2008 <484412270832442> Hdrs to client:[X-Powered-By]=[Servlet/2.5 JSP/2.1]
    Wed Nov 19 09:27:24 2008 <484412270832442> Content Length Unknown
    Wed Nov 19 09:27:24 2008 <484412270832442> canRecycle: conn=1 status=302 isKA=1 clen=-1 isCTE=1
    Wed Nov 19 09:27:24 2008 <484412270832442> closeConn: pooling for '192.168.130.103/80'
    Wed Nov 19 09:27:24 2008 <484412270832442> request [portal/server.pt] processed successfully ..................
    Wed Nov 19 09:27:24 2008 <484412270832443>
    If I do the same configuration in an environment where ALUI is running on IIS (without the proxy and WebLogic plug-in configuration), then automatic login to the portal works correctly.
    Have you got any idea how to solve this?

    Here are the steps to enable it (from what I remember):
    1. Log into the portal, update the Auth Source, select the "Synchronization" menu item on the left, change the option to 'Synchronization with Authentication Partner', and in the new dropdown that appears, select the only option, 'SSO Authentication Source'.
    2. portalconfig.xml - set the SSOVendor setting to 5 (WIA; it is currently 0) and the AllowAutoConnect flag to 1 on all web servers you'd like this to happen on (see the snippet after these steps).
    3. IIS Administrator - browse to the portal/sso virtual directory, right-click the 'sso' folder and select 'Properties'. On the Directory Security tab, click the 'Edit' button in the first section, "Administration and Access". Uncheck "Allow Anonymous Access" and check the box for "Integrated Windows Authentication" in the second section of the page, on all your web servers.
    4. Restart the portal on both servers.
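    For reference, here is roughly what the two settings from step 2 could look like in portalconfig.xml, following the format of the DefaultAuthSourcePrefix snippet quoted in the question. The setting names come from this thread; the value types (xsd:integer) are an assumption and may differ by version:
    <setting name="SSOVendor">
    <value xsi:type="xsd:integer">5</value>
    </setting>
    <setting name="AllowAutoConnect">
    <value xsi:type="xsd:integer">1</value>
    </setting>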
    There is a good deal of info on WIA on the archives from the Plumtree days
    http://forums.oracle.com/forums/search.jspa?threadID=&q=wia&objID=c234&dateRange=all&userID=&numResults=15&rankBy=10001

  • Issue with index on table

    Hi,
    We have created an index (call it Z2) on table CATSDB with 2 fields. There is another index (call it Z1) with the same fields in the same order. When a report accesses the table, it takes more time to run while index Z2 is on the table, but after Z2 was deleted, the report ran quickly. Could this be because of the duplicate index?
    Please let me know
    Regards
    Shiva

    Hi,
    I am giving the complete index and buffering concept details; by reading this you can understand how we can achieve performance through these.
    <b>Reward if useful.</b>
    <b>Performance during table access</b>
    <b>Indexes</b>
    Primary and secondary indexes
    Structure of an index
    Accessing tables using indexes
    <b>Table buffering</b>
    Advantages of buffering
    Concept of buffering
    Buffering types
    Buffer synchronization
    <b>Primary and secondary indexes</b>
    Index: Technical key of a database table.
    Primary index: The primary index contains the key fields of the table and a pointer to the non-key fields of the table. The primary index is created automatically when the table is created in the database.
    Secondary index: Additional indexes could be created considering the most frequently accessed dimensions of the table.
    <b>Structure of an Index</b>
    An index can be used to speed up the selection of data records from a table.
    An index can be considered to be a copy of a database table reduced to certain fields. The data is stored in sorted form in this copy. This sorting permits fast access to the records of the table (for example using a binary search). Not all of the fields of the table are contained in the index. The index also contains a pointer from the index entry to the corresponding table entry to permit all the field contents to be read.
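    To make the picture of a sorted, reduced copy concrete, here is a small illustrative sketch in Java (not SAP code; the Entry record and lookup method are purely hypothetical) of an index as a sorted list of key/row-pointer pairs searched with a binary search:
    import java.util.*;

    public class TinyIndex {
        // One index entry: the indexed field value plus a pointer to the full row.
        record Entry(String key, long rowId) {}

        private final List<Entry> entries; // kept sorted by key, like a database index

        TinyIndex(List<Entry> sortedEntries) { this.entries = sortedEntries; }

        // Binary search over the sorted entries, then follow the row pointer
        // to read the remaining field contents from the table itself.
        OptionalLong lookup(String key) {
            int lo = 0, hi = entries.size() - 1;
            while (lo <= hi) {
                int mid = (lo + hi) >>> 1;
                int cmp = entries.get(mid).key().compareTo(key);
                if (cmp == 0) return OptionalLong.of(entries.get(mid).rowId());
                if (cmp < 0) lo = mid + 1; else hi = mid - 1;
            }
            return OptionalLong.empty();
        }
    }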
    When creating indexes, please note that:
    An index can only be used up to the last specified field in the selection! The fields which are specified in the WHERE clause for a large number of selections should be in the first position.
    Only those fields whose values significantly restrict the amount of data are meaningful in an index.
    When you change a data record of a table, you must adjust the index sorting. Tables whose contents are frequently changed therefore should not have too many indexes.
    Make sure that the indexes on a table are as disjunctive as possible.
    (That is they should contain as few fields in common as possible. If two indexes on a table have a large number of common fields, this could make it more difficult for the optimizer to choose the most selective index.)
    <b>Accessing tables using Indexes</b>
    The database optimizer decides which index on the table should be used by the database to access data records.
    You must distinguish between the primary index and secondary indexes of a table. The primary index contains the key fields of the table. The primary index is automatically created in the database when the table is activated. If a large table is frequently accessed such that it is not possible to apply primary index sorting, you should create secondary indexes for the table.
    The indexes on a table have a three-character index ID. '0' is reserved for the primary index. Customers can create their own indexes on SAP tables; their IDs must begin with Y or Z.
    If the index fields have key function, i.e. they already uniquely identify each record of the table, an index can be called a unique index. This ensures that there are no duplicate index fields in the database.
    When you define a secondary index in the ABAP Dictionary, you can specify whether it should be created on the database when it is activated. Some indexes only result in a gain in performance for certain database systems. You can therefore specify a list of database systems when you define an index. The index is then only created on the specified database systems when activated.
    <b>Database access using Buffer concept</b>
    Buffering allows you to access data more quickly by letting you access it from the application server instead of the database.
    <b>Advantages of buffering</b>
    Table buffering increases the performance when the records of the table are read.
    As records of a buffered table are read directly from the local buffer of the application server on which the accessing transaction is running, time required to access data is greatly reduced. The access improves by a factor of 10 to 100 depending on the structure of the table and on the exact system configuration.
    If the storage requirements in the buffer increase due to further data, the data that has not been accessed for the longest time is displaced. This displacement takes place asynchronously at certain times which are defined dynamically based on the buffer accesses. Data is only displaced if the free space in the buffer is less than a predefined value or the quality of the access is not satisfactory at this time.
    Entering $TAB in the command field resets the table buffers on the corresponding application server. Only use this command if there are inconsistencies in the buffer. In large systems, it can take several hours to fill the buffers. The performance is considerably reduced during this time.
    <b>Concept of buffering</b>
    The R/3 System manages and synchronizes the buffers on the individual application servers. If an application program accesses data of a table, the database interface determines whether this data lies in the buffer of the application server. If this is the case, the data is read directly from the buffer. If the data is not in the buffer of the application server, it is read from the database and loaded into the buffer. The buffer can therefore satisfy the next access to this data.
    The buffering type determines which records of the table are loaded into the buffer of the application server when a record of the table is accessed. There are three different buffering types.
    With full buffering, all the table records are loaded into the buffer when one record of the table is accessed.
    With generic buffering, all the records whose left-justified part of the key is the same are loaded into the buffer when a table record is accessed.
    With single-record buffering, only the record that was accessed is loaded into the buffer.
    <b>Buffering types</b>
    With full buffering, the table is either completely or not at all in the buffer. When a record of the table is accessed, all the records of the table are loaded into the buffer.
    When you decide whether a table should be fully buffered, you must take the table size, the number of read accesses and the number of write accesses into consideration. The smaller the table is, the more frequently it is read and the less frequently it is written, the better it is to fully buffer the table.
    Full buffering is also advisable for tables having frequent accesses to records that do not exist. Since all the records of the table reside in the buffer, it is already clear in the buffer whether or not a record exists.
    The data records are stored in the buffer sorted by table key. When you access the data with SELECT, only fields up to the last specified key field can be used for the access. The left-justified part of the key should therefore be as large as possible for such accesses. For example, if the first key field is not defined, the entire table is scanned in the buffer. Under these circumstances, a direct access to the database could be more efficient if there is a suitable secondary index there.
    With generic buffering, all the records whose generic key fields agree with this record are loaded into the buffer when one record of the table is accessed. The generic key is a left-justified part of the primary key of the table that must be defined when the buffering type is selected. The generic key should be selected so that the generic areas are not too small, which would result in too many generic areas. If there are only a few records for each generic area, full buffering is usually preferable for the table. If you choose too large a generic key, too much data will be invalidated if there are changes to table entries, which would have a negative effect on the performance.
    A table should be generically buffered if only certain generic areas of the table are usually needed for processing.
    Client-dependent, fully buffered tables are automatically generically buffered. The client field is the generic key. It is assumed that not all of the clients are being processed at the same time on one application server. Language-dependent tables are a further example of generic buffering. The generic key includes all the key fields up to and including the language field.
    The generic areas are managed in the buffer as independent objects. The generic areas are managed analogously to fully buffered tables. You should therefore also read the information about full buffering.
    Single-record buffering is recommended particularly for large tables in which only a few records are accessed repeatedly with SELECT SINGLE. All the accesses to the table that do not use SELECT SINGLE bypass the buffer and directly access the database.
    If you access a record that was not yet buffered using SELECT SINGLE, there is a database access to load the record. If the table does not contain a record with the specified key, this record is recorded in the buffer as non-existent. This prevents a further database access if you make another access with the same key.
    You only need one database access to load a table with full buffering, but you need several database accesses with single-record buffering. Full buffering is therefore generally preferable for small tables that are frequently accessed.
    <b>Synchronizing local buffers</b>
    The table buffers reside locally on each application server in the system. However, this makes it necessary for the buffer administration to transfer all changes made to buffered objects to all the application servers of the system.
    If a buffered table is modified, it is updated synchronously in the buffer of the application server from which the change was made. The buffers of the whole network, that is, the buffers of all the other application servers, are synchronized with an asynchronous procedure.
    Entries are written in a central database table (DDLOG) after each table modification that could be buffered. Each application server reads these entries at fixed time intervals.
    If entries are found that show a change to the data buffered by this server, this data is invalidated. If this data is accessed again, it is read directly from the database. In such an access, the table can then be loaded to the buffer again.

  • LDAP SSL and Secure

    I am unable to get SSL or Secure LDAP connection to work.
    These are my settings for Directory-service:
    name: TEST
    description: TEST
    login-prefix: TEST
    type: GenericLdap
    last-sync: (no value)
    last-sync-error: The server is not operational.
    users: (no value)
    groups: (no value)
    Connection settings
    host: ldap.xon-ionx.****.se
    port: 636
    top-directory: ou=USER_CONTAINER,o=ROOT
    binding-type: Secure
    synchronization-account: cn=ZAV_User,ou=external,o=ROOT
    password: ********
    Schema settings
    user-filter: (objectClass=inetOrgPerson)
    user-class: inetOrgPerson
    user-login-name: cn
    user-first-name:
    user-last-name:
    user-full-name: cn
    group-filter: (objectClass=groupOfNames)
    group-class: groupOfNames
    group-name: cn
    group-description: description
    group-members: member
    The message from the server does not say much: Not synchronized (error: The server is not operational.)
    Debug log output as follows:
    05-07-2013 08:47:09.9960 - Critical - 0x0C5C: Directory service TEST could not be completely synced. Connection settings: host ldap.xon-ionx.****.se, port 636, top ou=USER_CONTAINER,o=ROOT, user cn=ZAV_User,ou=external,o=ROOT, type Secure, ufilter (objectClass=inetOrgPerson), uclass inetOrgPerson, uuname cn, ufname , ulname , uflname cn, gfilter (objectClass=groupOfNames), gclass groupOfNames, gdescription description, gmembership member
    The server is not operational.
    at System.DirectoryServices.DirectoryEntry.Bind(Boolean throwIfFail)
    at System.DirectoryServices.DirectoryEntry.Bind()
    at System.DirectoryServices.DirectoryEntry.get_AdsObject()
    at System.DirectoryServices.DirectorySearcher.FindAll(Boolean findMoreThanOne)
    at System.DirectoryServices.DirectorySearcher.FindAll()
    at Spoon.Server.Common.Data.Library.DirectoryService._SyncNode(LibraryDataContext dc, DirectoryServiceNode dsn, Dictionary`2 dictUsers, Dictionary`2 dictGroups, Dictionary`2 dictUsersToInclude, Dictionary`2 dictGroupsToInclude, Int32& iUsersAdded, Int32& iGroupsAdded)
    at Spoon.Server.Common.Data.Library.DirectoryService.Sync()
    /Mathias

    Do other binding options function as expected (Simple, Anonymous)? I'm also working on setting up a test environment to try and reproduce this. If I find something that can help, I'll update the thread.
    The support team could open a proper ticket with Spoon about this, but it requires that you open an SR first.
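    Independent of the product, a quick way to rule out basic LDAPS connectivity or certificate-trust problems is to attempt a bind from outside it. A minimal sketch using Java/JNDI, with the DN from the settings above and a placeholder host and password (the real host is masked above):
    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingException;
    import javax.naming.directory.InitialDirContext;

    public class LdapsCheck {
        public static void main(String[] args) throws NamingException {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldaps://ldap.example.se:636"); // placeholder host
            env.put(Context.SECURITY_AUTHENTICATION, "simple");
            env.put(Context.SECURITY_PRINCIPAL, "cn=ZAV_User,ou=external,o=ROOT");
            env.put(Context.SECURITY_CREDENTIALS, "secret"); // placeholder password
            // A CommunicationException here typically means the TLS handshake failed,
            // e.g. the server certificate is not trusted by the machine running the sync.
            new InitialDirContext(env).close();
            System.out.println("LDAPS bind succeeded");
        }
    }
    "The server is not operational" is a generic connection failure, so a bind check like this helps separate network and certificate-trust issues from product configuration.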

  • Make asynchronous event handler as synchronous using Powershell

    Hi All,
    Using custom code I am able to change an asynchronous event handler to synchronous.
    Can anybody let me know if there is any other way to make this change, such as a PowerShell script?
    I want to change the event handler type without redeploying the solution.
    Please advise.
    Thanks & Regards
    MD.Liakath ali

      
    Hi,
    You can use PowerShell to do so:
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    $web = Get-SPWeb -Identity http://...
    $list = $web.GetList($web.Url + "/Lists/" + "list name")
    $type = "ItemAdded" # or any other type, like ItemDeleting, ItemAdding, ItemUpdating, ...
    $found = $false
    $numberOfEventReceivers = $list.EventReceivers.Count
    if ($numberOfEventReceivers -gt 0) {
        for ($index = $numberOfEventReceivers - 1; $index -gt -1; $index--) {
            $receiver = $list.EventReceivers[$index]
            $name = $receiver.Name
            $typ = $receiver.Type
            # or check ($name -eq "event receiver's name") if you have more than one event receiver of the same type
            if ($typ -eq $type) {
                $receiver.Synchronization = "Synchronous"
                $receiver.Update()
                $found = $true
                Write-Host "Event receiver" $name "is changed to Synchronous"
            }
        }
    }
    if (-not $found) {
        Write-Host "There is no EventReceiver of type" $type "registered for this list"
    }
    $web.Dispose()
    or
    $list = (get-spweb http://sharepoint/sites/test).lists['somelist']
    $def = $list.EventReceivers.Add()
    $def.Assembly = "MyReceiverAssembly, Version=1.0.0.0, Culture=Neutral,PublicKeyToken=a00000000a000ce0"
    $def.Class = "MyReceiverAssembly.MyReceiverClass"
    $def.Type = [Microsoft.SharePoint.SPEventReceiverType]::ItemAdded
    $def.Name = "My ItemAdded Event Receiver";
    $def.Synchronization = [Microsoft.SharePoint.SPEventReceiverSynchronization]::Synchronous
    $def.Update()
    This should be done at each level where the list is present.
    Or you can edit the Elements.xml file of the event receiver (in the 14 hive feature folder) and set the synchronization element as below:
    <Synchronization>Synchronous</Synchronization>
    https://naimmurati.wordpress.com/2012/03/22/add-modify-or-delete-list-event-receivers-with-powershell/
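    For context, a minimal sketch of the Receiver definition that element sits in, reusing the illustrative assembly and class names from the PowerShell example above (the ListTemplateId and SequenceNumber values are assumptions):
    <Receivers ListTemplateId="100">
      <Receiver>
        <Name>My ItemAdded Event Receiver</Name>
        <Type>ItemAdded</Type>
        <SequenceNumber>10000</SequenceNumber>
        <Assembly>MyReceiverAssembly, Version=1.0.0.0, Culture=Neutral, PublicKeyToken=a00000000a000ce0</Assembly>
        <Class>MyReceiverAssembly.MyReceiverClass</Class>
        <Synchronization>Synchronous</Synchronization>
      </Receiver>
    </Receivers>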
    Regards,
    Rajendra Singh
    If a post answers your question, please click Mark As Answer on that post and Vote as Helpful
    http://sharepointundefind.wordpress.com/

  • Hotsync is preventing me from synching in Outlook Overwrites mode because of "other links" ....?!

    I'm getting this error... What does it mean, and what do I need to change, and how, mates...?
    The handheld folder, Contacts, has the synchronization option "Outlook Overwrites Handheld". A handheld folder with this type of synchronization option can not have other links with the synchronization option "Synchronize" because conflicts will occur. When you go to synchronize at the other PCs, you must go into the settings and correct the problem by either changing the synchronization option to "Handheld Overwrites Outlook" or "Do Nothing" or deleting the links.
    Cheers
    Stormy
    Post relates to: Treo 650 (Unlocked GSM)

    So you don't touch and it moves by itself?

  • Error in SSMA console

    Hi ,
    I am getting the following vague error when I execute the script using the SSMA console application:
    SSMAforOracleConsole.exe –s "C:\Users\Arup\Desktop\SQL Server\Console Application\ConversionAndDataMigrationSample_New.xml"
    FATALERR invalid argument used.
    Any help is highly appreciated

    Hi Lydia,
    Thanks for your reply. Please find below the code I am using for my SSMA migration.
    <?xml version="1.0" encoding="utf-8"?>
    <!--
    Script file for SSMA-v4.2 Console for Oracle.
    Commands execution order - from top to bottom.
    Command Processor distinguishes each command by element name.
    The element name is invariable! Never modify it!
    Use this file name as the parameter to SSMA-v4.2 Console for Oracle with mandatory
    option -s[cript]. See the documentation for SSMA-v4.2 Console for more information.
    -->
    <ssma-script-file xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="C:\Microsoft SQL Server Migration Assistant for Oracle\Schemas\O2SSConsoleScriptSchema.xsd">
    <!-- Variable values should mandatorily start and end with "$".
    These values can be defined in a separate variable value file
    (See :VariableValueFileSample.xml)
    ********** Set the variable values used by this sample **********
    ********** file in the corresponding Variables Value File **********
    -->
    <!-- The escape character for “$” is “$$”. If the value of a static value of a parameter begins with “$”,
    then "$$" must be specified to treat it as a static value instead of a variable. -->
    <!-- Optional section with console configuration options-->
    <config>
    <output-providers>
    <!-- Command specific messages do not appear on console if value set to "true".
    Attributes: destination="file" (optional)
    file-name="<file-path>" (optional along with destination attribute)
    By default destination="stdout" and suppress-messages="false" -->
    <output-window suppress-messages="false"
    destination="file"/>
    <!-- Enables upgrading a project created with an earlier version of SSMA to the current version.
    Available action attribute values
    • yes - upgrades the project (default)
    • no - displays an error and halts the execution
    • ask-user - prompts user for input (yes or no). -->
    <upgrade-project action="yes"/>
    <!-- Enables upgrading a project created with an earlier version of SSMA to the current version.
    Available action attribute values
    • yes - upgrades the project (default)
    • no - displays an error and halts the execution
    • ask-user - prompts user for input (yes or no). -->
    <upgrade-project action="yes"/>
    <!--Enables creation of database during connection. By default, mode is error.
    Available mode values
    • ask-user - prompts user for input (yes or no).
    • error - console displays error and halts the execution.
    • continue - console continues the execution.-->
    <!--<user-input-popup mode="continue" />-->
    <!-- Data migration connection parameters
    Specifies which source or target server to be considered for data migration
    Attributes: source-use-last-used="true" (default) or source-server="source_servername"
    target-use-last-used="true" (default) or target-server="target_servername" -->
    <data-migration-connection source-use-last-used="false"
    target-server="target_1"/>
    <!-- Progress Reporting. By default progress reporting is disabled.
    report-progress attribute values
    • off
    • every-1%
    • every-2%
    • every-5%
    • every-10%
    • every-20% -->
    <progress-reporting enable="false"
    report-messages="false"
    report-progress="off"/>
    <!-- Reconnect manager -->
    <!-- Reconnection parameter settings in case of connection failure
    Available reconnection modes
    • reconnect-to-last-used-server - If the connection is not alive it tries to reconnect to last used server
    • generate-an-error - If the connection is not alive it throws an error(default)
    <reconnect-manager on-source-reconnect="reconnect-to-last-used-server"
    on-target-reconnect="generate-an-error"/>-->
    <!-- Prerequisites display options.
    If strict-mode is true, an exception is thrown in case of prerequisite failures-->
    <!--<prerequisites strict-mode="true"/>-->
    <!-- Object overwrite during conversion. By default, action is overwrite
    Available action values
    • error
    • overwrite
    • skip
    • ask-user -->
    <object-overwrite action="skip" />
    <!-- Sets log verbosity level. By default log verbosity level is "error"
    Available logger level options:
    • fatal-error - Only fatal-error messages are logged
    • error - Only error and fatal-error messages are logged
    • warning - All levels except debug and info messages are logged
    • info - All levels except debug messages are logged
    • debug - All levels of messages logged
    Note: Mandatory messages are logged at any level-->
    <!--<log-verbosity level="error"/>-->
    <!-- Override encrypted password in protected storage with script file password
    Default option: "false" - Order of search: 1) Protected storage 2) Script File / Server Connection File 3) Prompt User
    "true" - Order of search: 1) Script File / Server Connection File 2) Prompt User -->
    <!--<encrypted-password override="true"/>-->
    </output-providers>
    </config>
    <!-- Optional section with server definitions -->
    <!-- Note: Server definitions can be declared in a separate file
    or can be embedded as part of script file in the servers section (below)-->
    <servers>
    <sql-server name="target_1">
    <windows-authentication>
    <server value="ARUP-MANISHA\SQLEXPRESS"/>
    <database value="AdventureWorks2012"/>
    <encrypt value="true"/>
    <trust-server-certificate value="true"/>
    </windows-authentication>
    </sql-server>
    <oracle name="source_1">
    <standard-mode>
    <connection-provider value ="OracleClient"/>
    <host value="arup-manisha" />
    <port value="1521" />
    <instance value="XE" />
    <user-id value="arup" />
    <password value="arup"/>
    </standard-mode>
    </oracle>
    <script-commands>
    <!--Create a new project.
    • Customize the new project created with project-folder and project-name attributes.
    • overwrite-if-exists attribute can take values "true/false" with default as "false".
    • project-type (optional attribute) can take values
    sql-server-2005 - Creates SSMA 2005 project
    sql-server-2008 - Creates SSMA 2008 project (default)
    sql-server-2012 - Creates SSMA 2012 project
    sql-server-2014 - Creates SSMA 2014 project
    sql-azure - Creates SSMA Azure project -->
    <create-new-project project-folder="C:\Users\Arup\Documents\SSMAProjects"
    project-name="SqlMigration5"
    overwrite-if-exists="true"
    project-type="sql-server-2012"
    />
    <!-- Connect to source database -->
    <!-- • Server(id) needs to mandatorily be defined in the servers section of the
    script file or in the Servers Connection File-->
    <connect-source-database server="source_1" />
    <!-- Connect to target database -->
    <!-- • Server(id) needs to mandatorily be defined in the servers section of the
    script file or in the Servers Connection File-->
    <connect-target-database server="target_1" />
    <!--Schema Mapping-->
    <!-- • source-schema specifies the source schema we intend to migrate.
    • sql-server-schema specifies the target schema where we want it to be migrated.-->
    <map-schema source-schema="arup"
    sql-server-schema="arup.dbo" />
    <!--Refresh from database-->
    <!-- Refreshes the source database
    • object-name specifies the object(s) considered for refresh.
    (can have individual object names or a group object name)
    • object-type specifies the type of the object specified in the object-name attribute.
    (if object category is specified then object type will be "category")
    • on-error specifies whether to specify refresh errors as warnings or error.
    Available options for on-error:
    •report-total-as-warning
    •report-each-as-warning
    •fail-script
    • report-errors-to specifies location of error report for the refresh operation (optional attribute)
    if only a folder path is given, then a file named SourceDBRefreshReport.XML is created -->
    <!-- Example1: Refresh entire Schema (with all attributes)-->
    <!--<refresh-from-database object-name="$OracleSchemaName$"
    object-type ="Schemas"
    on-error="fail-script"
    report-errors-to="$RefreshDBFolder$" /> -->
    <!-- Example2: Refresh a particular category say a procedure (other convention of the command with only mandatory attributes)-->
    <!--<refresh-from-database>
    <metabase-object object-name="$OracleSchemaName$.Testproc" object-type="Procedures"/>
    </refresh-from-database>-->
    <!-- Convert schema -->
    <!-- • object-name specifies the object(s) considered for conversion.
    (can have individual object names or a group object name)
    • object-type specifies the type of the object specified in the object-name attribute.
    (if object category is specified then object type will be "category")
    • conversion-report-folder specifies the folder where the conversion report is to be stored. (optional attribute)
    • conversion-report-overwrite specifies whether to overwrite the conversion report folder if it already exists.
    Default value: false. (optional attribute)
    • write-summary-report-to specifies the path where the summary report will be generated.
    If only the folder path is mentioned, then a file named SchemaConversionReport.XML is created. (optional attribute)
    • Summary report creation has 2 further sub-categories
    • report-errors (="true/false", with default as "false" (optional attributes))
    • verbose (="true/false", with default as "true/false" (optional attributes))
    -->
    <!-- Example1: Convert entire Schema (with all attributes)-->
    <convert-schema object-name="$OracleSchemaName$"
    object-type="Schemas"
    write-summary-report-to="$SummaryReports$"
    verbose="true"
    report-errors="true"
    conversion-report-folder="$ConvertARReportsFolder$"
    conversion-report-overwrite="true" />
    <!-- Example2: Convert entire Schema (only with mandatory attributes)-->
    <!--<convert-schema object-name="$OracleSchemaName$"
    object-type="Schemas" />-->
    <!-- alternate convention for ConvertSchema command-->
    <!-- Example3: Convert a specific category(say Tables)-->
    <!--<convert-schema>
    <metabase-object object-name="$OracleSchemaName$.Tables"
    object-type="category" />
    </convert-schema>-->
    <!-- Example4: Convert Schema for a specific object(say Table)
    (with only a few optional attributes & write-summary-report-to with a file name)-->
    <!--<convert-schema object-name="$OracleSchemaName$.TestTbl"
    object-type="Tables"
    write-summary-report-to="$SummaryReports$\ConvertSchemaReport1.xml"
    report-errors="true"
    />-->
    <!-- Synchronize target -->
    <!-- • object-name specifies the object(s) considered for synchronization.
    (can have individual object names or a group object name)
    • object-type specifies the type of the object specified in the object-name attribute.
    (if object category is specified then object type will be "category")
    • on-error specifies whether to specify synchronization errors as warnings or error.
    Available options for on-error:
    •report-total-as-warning
    •report-each-as-warning
    •fail-script
    • report-errors-to specifies location of error report for the synchronization operation (optional attribute)
    if only a folder path is given, then a file named TargetSynchronizationReport.XML is created.
    -->
    <!-- Example1: Synchronize target entire schema of Database with all attributes-->
    <synchronize-target object-name="arup.dbo"
    on-error="fail-script"
    report-errors-to="$SynchronizationReports$" />
    <!-- Example2: Synchronizing a particular category (say Procedures) of the schema alone -->
    <!--<synchronize-target object-name="$SQLServerDb$.dbo.Procedures"
    object-type="category" />-->
    <!--(alternative convention for Synchronize target command)-->
    <!-- Example3: Synchronization target of individual objects -->
    <!--<synchronize-target>
    <metabase-object object-name="$SQLServerDb$.dbo.TestTbl"
    object-type="Tables" />
    </synchronize-target>-->
    <!-- Example4: Synchronization of individual objects with no object-type attribute-->
    <!--<synchronize-target>
    <metabase-object object-name="$SQLServerDb$.dbo.TestTbl" />
    </synchronize-target>-->
    <!-- Save As Script-->
    <!-- Used to save the scripts of the objects to the file mentioned;
    when metabase=target, this is an alternative to the synchronization command, wherein we get the scripts and execute them on the target database.
    • object-name specifies the object(s) whose scripts are to be saved.
    (can have individual object names or a group object name)
    • object-type specifies the type of the object specified in the object-name attribute.
    (if object category is specified then object type will be "category")
    • destination specifies the path or the folder where the script is to be saved; if the file name is not given, then a file name in the format (object_name attribute value).out is used
    • metabase specifies whether it is the source or the target metabase.
    • overwrite: if true, it overwrites the file if the same filename exists. It can take the values true/false -->
    <!-- Example1 : Save as script from source metabase-->
    <!-- <save-as-script destination="$SaveScriptFolder$\Script1.sql"
    metabase="source"
    object-name="$OracleSchemaName$"
    object-type="Schemas"
    overwrite="true" />-->
    <!-- Example2 : Save as script from target metabase-->
    <!-- <save-as-script metabase="target" destination="$SaveScriptFolder$\Script2.sql" >
    <metabase-object object-name="$SQLServerDb$" object-type ="Databases"/>
    </save-as-script> -->
    <!-- Data Migration-->
    <!-- • object-name specifies the object(s) considered for data migration.
    (can have individual object names or a group object name)
    • object-type specifies the type of the object specified in the object-name attribute.
    (if object category is specified then object type will be "category")
    • write-summary-report-to specifies the path where the summary report will be generated.
    If only the folder path is mentioned, then a file named DataMigrationReport.XML is created. (optional attribute)
    • Summary report creation has 2 further sub-categories
    • report-errors (="true/false", with default as "false" (optional attributes))
    • verbose (="true/false", with default as "false" (optional attributes))
    -->
    <!--Example1: Data Migration of all tables in the schema (with all attributes)-->
    <migrate-data object-name="arup.Tables"
    object-type="category"
    write-summary-report-to="$SummaryReports$"
    report-errors="true"
    verbose="true" />
    <!--alternative convention for Data Migration Command-->
    <!--Example2: Data Migration of specific tables with no object-type attribute & write-summary-report-to with a file name -->
    <!--<migrate-data write-summary-report-to="$SummaryReports$\datamigreport.xml"
    verbose="true">
    <metabase-object object-name="$OracleSchemaName$.TestTbl" />
    </migrate-data>-->
    <!-- Save project -->
    <save-project />
    <!-- Close project -->
    <close-project />
    </script-commands>
    </ssma-script-file>
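    One thing worth double-checking against the FATALERR above: the console switch must be a plain ASCII hyphen, and the command quoted in the question contains an en dash (–s), which a command-line parser will not accept as the -s option. A correct invocation looks like:
    SSMAforOracleConsole.exe -s "C:\Users\Arup\Desktop\SQL Server\Console Application\ConversionAndDataMigrationSample_New.xml"
    Also note that this script references variables such as $OracleSchemaName$ and $SummaryReports$; those values have to be supplied, normally through a separate variable values file passed via the console's variable-value-file option (see the SSMA console documentation for the exact switch).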

  • A few questions on BDB replication

    I have a few questions on replication and will appreciate any help that I can get:
    1. In standby mode, are there any issues if the existing DB files are explicitly not opened? In this scenario the standby DB host went down and the BDB application was brought up; the environment was opened with the recovery option, but the DB files were not opened.
    2. What happens if a standby application goes down while synchronization is in progress, i.e. the STARTUPDONE event has not been received - will the subsequent database recovery complete (after the application has been restarted)? Are there any APIs to check if the DB is in a consistent usable state?
    3. How are user-created log entries (created by log_put) handled at the standby DB? If we use the base replication API(s), is there any way to trap and extract the log entry before/after the rep_process_message call?
    Thanks for your help.

    Hello,
    Here are some answers to your questions.
    1. BDB does not care whether or not the application has any database files opened.
    When the standby applies transactions to a database it opens up anything it
    needs internally.
    2. There are two types of synchronization. The first is when a replica was down, and it
    is now simply a bit behind the master when it comes back up. In that situation, it is simply
    catching up to the master. If it were to crash during that time, it would again catch up to
    the master when it rebooted. The second is internal initialization where we need to copy
    over the databases, logs and run a recovery on them (internally of course). If the replica
    were to crash during this operation, the partial databases/logs that exist on reboot will
    be cleaned up automatically and the initialization would restart when communication was
    re-established.
    3. When a replica receives a log record (any log record, user-created or BDB-created),
    it simply writes it into the log. Only when the replica receives a txn_commit does the
    replica call the recovery functions to apply the log records on the replica. That would be
    the time when the function for an app-specific log record would be called.
    There is no support for apps to crack open the replication messages.
    If you're using the Base API you are in control of the communication already though. If
    the master needs to send something to the clients, the application could have a different
    type of message that is app-specific and doesn't involve BDB or rep_process_message at
    all. Is that what you're trying to accomplish?
    Sue LoVerso
    Oracle

  • Mountain lion and iphone upload

    Since upgrading to Mountain Lion, my daughter has seen two weekly uploads from her iPhone that have killed her data plan.  The last upload was 128 MB.  These take place weekly at the time of the normal iPhone upload to AT&T.  Is there some type of synchronization that is going on with the Cloud?  If so, what can we turn off in the Cloud Services?  She does not seem to use it except for a few texts.  We are trying to isolate what is going on and can see the early Saturday morning upload on the AT&T account.  Thank you

    In setting up her iPhone:
    Settings-General-Diagnostics and Usage. Select Don't Send.
    If that doesn't straighten things out, complain to AT&T. This has nothing to do with Mountain Lion.

  • Nested synchronized blocks on multiple objects without deadlock

    Is it possible? Here is an example problem that exhibits deadlock:
    public class SomeObject {
         public static class Foo {
              private int n = 0;
              public synchronized void p() {
                   n++;
                   if (n % 100 == 0)
                        System.out.print("f");
              }
         }
         public static class Bar {
              private int n = 0;
              public void p() {
                   n++;
                   if (n % 100 == 0)
                        System.out.print("b");
              }
         }
         public void doSomething1(Foo f, Bar b) {
              synchronized (f) {
                   synchronized (b) {
                        f.p();
                        b.p();
                   }
              }
         }
         public void doSomething2(Foo f, Bar b) {
              synchronized (b) {
                   synchronized (f) {
                        b.p();
                        f.p();
                   }
              }
         }
         public static void main(String[] args) {
              SomeObject so = new SomeObject();
              Foo f = new Foo();
              Bar b = new Bar();
              for (int i = 0; i < 100; i++)
                   (new MyThread(so, f, b)).start();
         }
         public static class MyThread extends Thread {
              SomeObject so = null;
              Foo f = null;
              Bar b = null;
              public MyThread(SomeObject so, Foo f, Bar b) {
                   this.so = so;
                   this.f = f;
                   this.b = b;
              }
              public void run() {
                   while (true) {
                        if (Math.random() > 0.5)
                             so.doSomething1(f, b);
                        else
                             so.doSomething2(f, b);
                   }
              }
         }
    }

    Well, playing with synchronized code has to be done carefully.
    This is very important because the JVM does not break deadlocks (for example by throwing an exception in a deadlocked thread), so in the above example, once two of the threads deadlock, the rest of the threads created in the main method loop will deadlock too.
    For that it is better to:
    a) use minimal synchronized code; as in Foo.p, the method is synchronized to guard access to an internal variable.
    b) use layered synchronization. If several components in a system have synchronized code (methods) or are used as monitors in synchronized blocks, then it is better to layer the monitor usage nesting and to eliminate circular dependencies (it is also good to minimize dependencies). This is similar to code dependency, in which circular package dependencies should be avoided, though in this case it is critical.
    In the example given, the doSomething* methods create a circular dependency of monitors.
    If the design is to have a 'transaction' when calling Foo.p() and Bar.p() from the doSomething* methods, then a 'grand-daddy' central lock would be a simple solution, like changing the doSomething* methods to:
    static public synchronized void doSomething*(Foo f, Bar b) {
         f.p();
         b.p();
    }
    or
    public void doSomething*(Foo f, Bar b) {
         synchronized (SomeObject.class) {
              f.p();
              b.p();
         }
    }
    If there is no need for a 'transaction' when calling Foo.p() and Bar.p(), then the doSomething* methods should not have any type of synchronization, and the methods Foo.p() and Bar.p() should both be synchronized.
    A more complex scheme could also be devised to lower the contention on the SomeObject class monitor, though for the example given, since only one instance each of Foo and Bar exists, it is not needed.
    llongeri
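    For completeness, here is a minimal sketch of the lock-ordering alternative implied by point (b) above: if every code path acquires the two monitors in the same global order, no circular wait can form, so the deadlock disappears while keeping per-object locks instead of one class-wide lock. These are drop-in replacements for the doSomething* methods; which object is locked first is arbitrary, but it must be consistent everywhere:
    public void doSomething1(Foo f, Bar b) {
         synchronized (f) {
              synchronized (b) {
                   f.p();
                   b.p();
              }
         }
    }

    public void doSomething2(Foo f, Bar b) {
         synchronized (f) { // was synchronized(b) outermost; that inversion caused the deadlock
              synchronized (b) {
                   b.p();
                   f.p();
              }
         }
    }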

  • Performance Improvement - Buffering

    Hi ,
    To improve the performance of an ABAP program, we can buffer database tables.
    My question is: which tables can be buffered, master database tables or transactional database tables?
    Regards,
    Archana.

    Hi
    Table buffering
    Advantages of buffering
    Concept of buffering
    Buffering types
    Buffer synchronization
    <b>Database access using Buffer concept</b>
    Buffering allows you to access data more quickly by letting you access it from the application server instead of the database.
    <b>Advantages of buffering</b>
    Table buffering increases the performance when the records of the table are read.
    As records of a buffered table are read directly from the local buffer of the application server on which the accessing transaction is running, time required to access data is greatly reduced. The access improves by a factor of 10 to 100 depending on the structure of the table and on the exact system configuration.
    If the storage requirements in the buffer increase due to further data, the data that has not been accessed for the longest time is displaced. This displacement takes place asynchronously at certain times which are defined dynamically based on the buffer accesses. Data is only displaced if the free space in the buffer is less than a predefined value or the quality of the access is not satisfactory at this time.
    Entering $TAB in the command field resets the table buffers on the corresponding application server. Only use this command if there are inconsistencies in the buffer. In large systems, it can take several hours to fill the buffers. The performance is considerably reduced during this time.
    <b>Concept of buffering</b>
    The R/3 System manages and synchronizes the buffers on the individual application servers. If an application program accesses data of a table, the database interface determines whether this data lies in the buffer of the application server. If this is the case, the data is read directly from the buffer. If the data is not in the buffer of the application server, it is read from the database and loaded into the buffer. The buffer can therefore satisfy the next access to this data.
    The buffering type determines which records of the table are loaded into the buffer of the application server when a record of the table is accessed. There are three different buffering types.
    With full buffering, all the table records are loaded into the buffer when one record of the table is accessed.
    With generic buffering, all the records whose left-justified part of the key is the same are loaded into the buffer when a table record is accessed.
    With single-record buffering, only the record that was accessed is loaded into the buffer.
    <b>Buffering types</b>
    With full buffering, the table is either completely or not at all in the buffer. When a record of the table is accessed, all the records of the table are loaded into the buffer.
    When you decide whether a table should be fully buffered, you must take the table size, the number of read accesses and the number of write accesses into consideration. The smaller the table is, the more frequently it is read and the less frequently it is written, the better it is to fully buffer the table.
    Full buffering is also advisable for tables having frequent accesses to records that do not exist. Since all the records of the table reside in the buffer, it is already clear in the buffer whether or not a record exists.
    The data records are stored in the buffer sorted by table key. When you access the data with SELECT, only fields up to the last specified key field can be used for the access. The left-justified part of the key should therefore be as large as possible for such accesses. For example, if the first key field is not defined, the entire table is scanned in the buffer. Under these circumstances, a direct access to the database could be more efficient if there is a suitable secondary index there.
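    As a rough sketch of this point (reusing T100, the message table from the example further down, and assuming purely for illustration that it were fully buffered):
    DATA LT_T100 TYPE STANDARD TABLE OF T100.
    " Leading key fields SPRSL and ARBGB specified: the sorted
    " buffer can be searched efficiently.
    SELECT * FROM T100 INTO TABLE LT_T100
      WHERE SPRSL = 'D'
        AND ARBGB = '00'.
    " First key field SPRSL omitted: the entire table is scanned
    " in the buffer; a database access via a suitable secondary
    " index could be more efficient.
    SELECT * FROM T100 INTO TABLE LT_T100
      WHERE ARBGB = '00'.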
    With generic buffering, all the records whose generic key fields agree with this record are loaded into the buffer when one record of the table is accessed. The generic key is a left-justified part of the primary key of the table that must be defined when the buffering type is selected. The generic key should be selected so that the generic areas are not too small, which would result in too many generic areas. If there are only a few records for each generic area, full buffering is usually preferable for the table. If you choose too large a generic key, too much data will be invalidated if there are changes to table entries, which would have a negative effect on the performance.
    A table should be generically buffered if only certain generic areas of the table are usually needed for processing.
    Client-dependent, fully buffered tables are automatically generically buffered. The client field is the generic key. It is assumed that not all of the clients are being processed at the same time on one application server. Language-dependent tables are a further example of generic buffering. The generic key includes all the key fields up to and including the language field.
    The generic areas are managed in the buffer as independent objects. The generic areas are managed analogously to fully buffered tables. You should therefore also read the information about full buffering.
    Single-record buffering is recommended particularly for large tables in which only a few records are accessed repeatedly with SELECT SINGLE. All the accesses to the table that do not use SELECT SINGLE bypass the buffer and directly access the database.
    If you access a record that was not yet buffered using SELECT SINGLE, there is a database access to load the record. If the table does not contain a record with the specified key, this record is recorded in the buffer as non-existent. This prevents a further database access if you make another access with the same key.
    You only need one database access to load a table with full buffering, but you need several database accesses with single-record buffering. Full buffering is therefore generally preferable for small tables that are frequently accessed.
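    A short sketch of the difference, assuming purely for illustration that a table such as T100 were set to single-record buffering:
    DATA: T100_WA TYPE T100,
          LT_T100 TYPE STANDARD TABLE OF T100.
    " SELECT SINGLE with the fully specified key: served from the
    " single-record buffer after the first read.
    SELECT SINGLE * FROM T100 INTO T100_WA
      WHERE SPRSL = 'D' AND ARBGB = '00' AND MSGNR = '999'.
    " Any other read bypasses the single-record buffer and goes
    " straight to the database.
    SELECT * FROM T100 INTO TABLE LT_T100
      WHERE SPRSL = 'D' AND ARBGB = '00'.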
    Synchronizing local buffers
    The table buffers reside locally on each application server in the system. However, this makes it necessary for the buffer administration to transfer all changes made to buffered objects to all the application servers of the system.
    If a buffered table is modified, it is updated synchronously in the buffer of the application server from which the change was made. The buffers of the whole network, that is, the buffers of all the other application servers, are synchronized with an asynchronous procedure.
    Entries are written in a central database table (DDLOG) after each table modification that could be buffered. Each application server reads these entries at fixed time intervals.
    If entries are found that show a change to the data buffered by this server, this data is invalidated. If this data is accessed again, it is read directly from the database. In such an access, the table can then be loaded to the buffer again.
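    Since DDLOG is an ordinary transparent table, you can also inspect the pending synchronization entries yourself; a minimal sketch (the payload itself is stored in raw format):
    DATA LT_DDLOG TYPE STANDARD TABLE OF DDLOG.
    " Read a sample of the buffer-synchronization log entries.
    SELECT * FROM DDLOG INTO TABLE LT_DDLOG UP TO 100 ROWS.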
    Using buffered tables improves performance considerably.
    Bypassing the buffer, on the other hand, increases the network load considerably:
    DATA T100_WA TYPE T100.  "work area for message table T100
    " Forces a read from the database, ignoring the table buffer:
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The code above can be optimized by simply dropping BYPASSING BUFFER, so that the read is served from the local table buffer:
    " Served from the application server's table buffer:
    SELECT SINGLE * FROM T100 INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.

  • Which one of these options is better for CDS updation?

    Hi,
    I was going through the data flow in the outbound direction in NW Mobile.
    The data object instances in the CDS are updated by three means:
    1) DOE triggered
    2) backend triggered
    3) backend pushes an instance to the DOE
    Which one of these is better?
    Kindly explain the reason as well.

    Hi,
    Actually there are only two synchronization types between the backend and the DOE:
    1. DOE Triggered
    2. Backend Triggered
    DOE-triggered adapters pull data from the backend to the DOE, whereas backend-triggered adapters push data from the backend to the DOE (there are two kinds of backend-triggered push: key push and instance push).
    The backend-triggered sync type is always better, since the changes are pushed to the DOE then and there (this can be achieved through change events, etc.). That way the DOE and the backend are always consistent, and the receivers (devices) also get the latest data they need to proceed with their day-to-day activities (i.e. business-critical data).
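    A hypothetical sketch of an instance push from the backend; the function name Z_DOE_PUSH_ORDER, the structure ZORDER and the destination DOE_RFC_DEST are placeholders, since the actual inbound interface is generated for your data object:
    DATA LS_ORDER TYPE ZORDER.  "placeholder backend structure
    " ... fill LS_ORDER from the change event / change document ...
    " Push the changed instance to the DOE so that devices receive
    " it on their next synchronization.
    CALL FUNCTION 'Z_DOE_PUSH_ORDER' DESTINATION 'DOE_RFC_DEST'
      EXPORTING
        IS_ORDER = LS_ORDER.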
    Regards,
    Ananth.

  • Setting parameters for synching pci-6534 cards via RTSI bus

    I have been performing high-speed,
    buffered, looping output with one pci-6534 card.  I am now adding
    a second 6534 card that I need to sync to the first card via the RTSI
    bus.  I have successfully used the RTSI bus to see the master REQ1
    and ACK1 signals on those channels of the slave (seen at a connector
    block), using the "RTSI control" vi.  I simply set the master and
    slave as transmitter and receiver, respectively, over the RTSI bus.
    The question is: Once I have used the RTSI control vi to share the
    necessary signals, do I need to do anything in my "dio config," "dio
    write," or "dio start" vi's in the looping output code for the 2nd 6534
    card to let it know that its REQ, ACK, STPTRG, and CLK signals are
    coming from the bus?  For example, in the buffered pattern looping
    output vi, the "dio start" vi has choices of "internal" or "RTSI
    connection" for its clock.  My master board's code simply uses the
    internal clock.  Does my slave need to be set to RTSI connection, or,
    once I have shared the clock signal over the RTSI bus, is that
    effectively the internal clock for my slave too?
    I apologize if this question is confusing.  Unfortunately, so is the issue.

    Hello bwz,
    When you are performing synchronization across the RTSI bus you need to specify that the slave device should get its clock signals from there.  You would use the digital clock config VI to do this.  If you look in the example finder, you will find synchronization example VIs that do the same kind of thing for analog input.  To find the examples, open the example finder by going to Help >> Find Examples >> Hardware Input and Output >> Traditional DAQ >> Multiple Device. 
    If you are just getting started developing your application, you may want to consider using DAQmx.  There are many more examples available to look at for this type of synchronization.  To find these examples in the example finder go to Hardware Input and Output >> DAQmx >> Synchronization >> Multi-Device.  To use your PCI-6534 with NI-DAQmx, you must have version 7.4 or later.  The newest version is DAQmx 7.5.  You may also want to look at this tutorial about synchronization with DAQmx. 
    I hope this helps!
    Laura
