Servlet attribute synchronization

Hello,
I have a servlet and a class responsible for database access. I create this class during context initialization and bind it as a context attribute. Later my servlets access it to operate on the database. My question is: do I need to synchronize anything in these database access classes? For example, if I define a field (say catalogName) that stores a database table name, and servlets change the value of this field to access different tables, do I need to synchronize access to this field? Because if two users access the class at the same moment, they both might want to change its value. Hope I made my question clear.
Andrius

Short answer: Yes.
Long answer: You should rethink your design if you store such information in an instance held in the ServletContext. A better approach is to store only a connection pool in the ServletContext and let each servlet retrieve a connection from the pool in a thread-safe manner, then put it back into the pool after processing. This way you reduce thread contention and improve performance as well as the quality of your design.
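Robert's suggestion can be sketched in plain Java. This is a minimal illustration of the check-out/check-in pattern, not production code: a real application would pool java.sql.Connection objects or, better, store a javax.sql.DataSource in the ServletContext; the SimplePool name and shape here are invented for illustration.

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch of the check-out/check-in pattern (names are illustrative).
// A real application would pool java.sql.Connection objects, or better,
// keep a javax.sql.DataSource in the ServletContext.
class SimplePool<T> {
    private final BlockingQueue<T> available;

    SimplePool(List<T> resources) {
        available = new ArrayBlockingQueue<>(resources.size());
        available.addAll(resources);
    }

    // Blocks until a resource is free; the borrowing thread has exclusive use.
    T borrow() throws InterruptedException {
        return available.take();
    }

    // Put the resource back so other request threads can reuse it.
    void giveBack(T resource) {
        available.add(resource);
    }
}
```

Each doGet/doPost call would borrow a connection, work with it through local variables only, and return it in a finally block; per-request state such as a table name then never needs to live in a shared field.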
regards
robert

Similar Messages

  • How to access servlet objects from OA page controller class

    Hi everybody!
I need to put some value into a servlet attribute in an OA page controller class so I can read it from an ordinary servlet later.
How can I do it? Is it possible to get the HttpServletRequest and HttpServletResponse objects from a page controller?
    Thank you.

I have a servlet which receives uploaded files with special attributes (something like tags for the file) via a POST request.
These attributes are created when the user opens a standard OAF page via the page controller.
On the client side I have an applet which uploads the user-selected file to my servlet and passes the file's attributes.
Right now these attributes are passed as plain text. I want to encrypt them to hide the attribute details from the user. To do this I need to share some information between the OAF page and my servlet.
I know that OAF supports URL encryption, but to decrypt it I would have to use the standard pageContext object,
which I can't use in an ordinary servlet.

  • Basic questions about Tomcat

    Hi all, I'm new to Tomcat and I have a few questions...
1) I have an HttpServlet class in my server. Will an instance of this class be created for each request? For performance reasons, can I specify a number of instances to be pre-created before user requests arrive? (Creating them at request time may be too slow.)
2) I'd like to know more about threads in servlet containers. If my HttpServlet class has an instance of a class, will the attributes of this class be thread-safe? How does this work?
    thank you in advance

1) I have an HttpServlet class in my server. Will an instance of this class be created for each request?
No. There is one instance of each servlet object.
For performance reasons, can I specify a number of instances to be pre-created before user requests? (creating them at user request may be too slow)
On my laptop creating a simple object takes 0.00000001 seconds. Object creation is fast in Java; don't worry about it (unless you really are trying to create a hundred million objects per second and profiling shows that's the bottleneck in your application).
2) I'd like to know more about threads in servlet containers. If my HttpServlet class has an instance of a class, will the attributes of this class be thread-safe? How does this work?
Don't put fields in servlets. Synchronize access to any data that is shared between threads.
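The reason behind that advice can be demonstrated with plain Java threads. This is an illustrative sketch only: SharedCounter stands in for a servlet instance field, and the eight threads stand in for concurrent requests hitting the single servlet instance.

```java
// There is one servlet instance, so an instance field is shared by every
// request thread. This sketch shows why such shared state must be
// synchronized: many threads update one counter, mirroring many requests
// touching one servlet field.
class SharedCounter {
    private int value = 0;

    // Synchronizing makes the read-modify-write atomic.
    synchronized void increment() { value++; }

    synchronized int get() { return value; }
}

public class FieldSharingDemo {
    public static void main(String[] args) throws InterruptedException {
        SharedCounter counter = new SharedCounter();
        Thread[] workers = new Thread[8];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) counter.increment();
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        // With synchronization the count is exact.
        System.out.println(counter.get()); // prints 80000
    }
}
```

Without the synchronized keyword the increments can interleave and the final count comes up short, which is exactly the hazard of keeping mutable fields in a servlet.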

  • Multipe threads access: best way to handle

    Hi,
I understand that we have issues when multiple threads access the instance variables in a servlet. Do we have the same issue with local variables? I have a doGet method that has two lines of code:
line 1: read request parameters and create a data object.
line 2: make a call to a facade to update this object in the database.
What options do I have to make sure that there are no concurrent access issues?
    -Deepak

No, you don't have the same issue with local variables; new ones are created for each thread. You should either avoid adding attributes to a servlet, or synchronize access to them if they are objects (there are no issues with primitive values).
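The point about local variables can be illustrated with plain threads; handleRequest below is a hypothetical stand-in for a doGet body, and the names are invented for this sketch.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Each thread (each doGet call) gets its own copy of every local variable
// on its own stack, so no synchronization is needed for them.
public class LocalVarDemo {
    // Stands in for a doGet body: everything here is a local variable.
    static int handleRequest(int requestParam) {
        int localTotal = 0;              // fresh per call, per thread
        for (int i = 0; i < requestParam; i++) localTotal += i;
        return localTotal;
    }

    public static void main(String[] args) throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        int[] results = new int[4];
        for (int i = 0; i < 4; i++) {
            final int n = (i + 1) * 10;
            final int slot = i;
            Thread t = new Thread(() -> results[slot] = handleRequest(n));
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) t.join();
        // Each thread computed its own sum with no locks and no interference.
        System.out.println(Arrays.toString(results)); // prints [45, 190, 435, 780]
    }
}
```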

  • WebServer 6.1 SP3 SSL reverse proxy to Sun One Application Server 7

    I have an application in the appserver7 that requires SSL authentication. I have already installed a self cert in the appserver7, and the authentication works fine when I browse directly to the appserver.
    The appserver7 has both listener for port 80 and 443 enabled.
    I'm currently setting up a webserver (WebServer 6.1 SP3) to act as a reverse proxy to the appserver7. The reverse proxy for the basic jsp pages found in the appserver worked fine.
When I try to access the login page in the appserver in SSL mode, I am unable to do so. I then tried changing the obj.conf to the following, from http to https:
<Object name="passthrough">
ObjectType fn="force-type" type="magnus-internal/passthrough"
Service fn="service-passthrough" method="(GET|HEAD|POST)" servers="https://172.28.48.53"
    However, it still doesn't work.
    Do I need to install a self cert in the webserver and enable the ssl listener as well?
    Do I need to install any reverse proxy addon for the appserver? Any
    setup for the obj.conf in the appserver?
    Any ideas how to get this done?
    Thanks.
    Mac.

    The Web Server 6.1 SP3 Reverse Proxy Plugin is supported, but it sounds like you're trying to do something that simply isn't possible.
If you want the Reverse Proxy Plugin to perform SSL mutual authentication with the Application Server using the client's certificate, that's impossible due to the nature of SSL mutual authentication. If the plugin could impersonate the client, then SSL would be vulnerable to man-in-the-middle (MITM) attacks. Fortunately, SSL isn't vulnerable to such attacks, because the plugin doesn't know the client's private key.
If you simply want the Reverse Proxy Plugin to pass information about the client's certificate along to the Application Server, that happens automatically. There's nothing special to configure. Note that the plugin will not authenticate to the Application Server in this case. Rather, it will simply copy the X.509 certificate into the proprietary Proxy-auth-cert: HTTP request header.
    The application running on the Application Server can inspect the Proxy-auth-cert: header using standard Servlet APIs. Alternatively, you can use Application Server 7's auth-passthrough AuthTrans SAF to cause the contents of the Proxy-auth-cert: header to be copied to the javax.servlet.request.X509Certificate Servlet attribute.

  • Does anybody have a solution for the NAT problem?

    Is somebody's application or Applet able to play any RTP stream behind a NAT Router? Can anybody establish any kind of connection / broadcasting between two subnets? I've got my RTP-Transmitter@public IP (using RTPManager...SendStream.start()), and I try to receive the stream from my local network which is behind a router (DHCP: 192.168....).
    I read forums, newsgroups, looked for any solution for days all over the web but I've found nothing. Zero.
    What's the secret? Any hints?
    Best regards from Munich / Germany,
    r.v.

    Hi
    I have the same problem.
I have an applet transmitter that captures video from a webcam and transmits it to another client on the internet.
I try to transmit a MediaLocator from the applet transmitter to servlet1 and save the MediaLocator as a servlet attribute; another client can then connect to servlet2, which sends the saved MediaLocator to the applet client.
APPLETTRANSMITTER:
URL url = null;
MediaLocator media = new MediaLocator("vfw://0");
try {
    url = new URL("http://localhost:8080/servlet1");
} catch (MalformedURLException mue) {
    mue.printStackTrace();
}
URLConnection conn = null;
try {
    conn = url.openConnection();
} catch (IOException ioe) {
    ioe.printStackTrace();
}
conn.setDoOutput(true);
OutputStream os = null;
ObjectOutputStream oos = null;
InputStream in = null;
ObjectInputStream iin = null;
MediaLocator mResp = null;
String r = null;
try {
    os = conn.getOutputStream();
    oos = new ObjectOutputStream(os);
    oos.writeObject(media);
    //oos.writeObject("Prova Servlet");
    oos.flush();
} catch (IOException io) {
    io.printStackTrace();
}
SERVLET1:
ObjectInputStream objin = new ObjectInputStream(request.getInputStream());
MediaLocator ml = null;
try {
    ml = (MediaLocator) objin.readObject();
    context.setAttribute("media", ml);
} catch (ClassNotFoundException e) {
    e.printStackTrace();
}
But in servlet1 there is a ClassNotFoundException: MediaLocator.
What do you think about this solution and the exception problem?
    Best Regards,
    Nico from Italy

  • Sys.fn_xe_file_target_read_file performance

    Hi,
    I am working with extended events and the function sys.fn_xe_file_target_read_file. I am using the file_name and file_offset to get new events, since the last time I queried the function. I am checking for new events once a minute like this:
exec sp_executesql N'SELECT @@SERVERNAME AS server_name, @session_name AS session_name, @collection_id AS collection_id, module_guid, package_guid, [object_name], event_data, [file_name], file_offset FROM sys.fn_xe_file_target_read_file(@file_path, NULL, @file_name, @file_offset)',
N'@session_name nvarchar(13), @collection_id bigint, @file_path nvarchar(84), @file_name nvarchar(104), @file_offset bigint',
@session_name=N'system_health', @collection_id=4127175,
@file_path=N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\LOG\system_health*.xel',
@file_name=N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\LOG\system_health_0_130317495005090000.xel',
@file_offset=66560
I have started to get a problem on some servers where it is very slow, even when there are few new events. It can take up to 30 seconds to get one single event.
After some investigation I found that it seems to be related to the number of files. I had configured the extended event target to 100 files of size 5 MB. On the server where we have the problem there were 41 files. When I deleted all files but the latest, the time went down to about 1 second. After stopping and starting the session and then deleting the old file (keeping only one small file), the time went down to about 50 ms.
    So I am trying to figure out how sys.fn_xe_file_target_read_file works (what is it doing), and the pros and cons of having many files or large files. I will do some testing, but if anyone has any experiences to share it would be great.
    The version number is 11.0.3373.0.
    Best regards
    Ola Hallengren
    http://ola.hallengren.com

I was doing some testing with Process Monitor. I can see that it is accessing all of the files. Here is the output from this query:
SELECT *
FROM sys.fn_xe_file_target_read_file('C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health*.xel',
NULL,
'C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130317438925230000.xel',
2050560)
    "Time of Day","Process Name","PID","Operation","Path","Result","Detail"
    "21:51:06.3127313","sqlservr.exe","1784","CreateFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health*_0_9223372036854775807.xel","NAME INVALID","Desired Access: Read Attributes, Synchronize, Dis, Options: Synchronous
    IO Non-Alert, Non-Directory File, Attributes: n/a, ShareMode: Read, Write, AllocationSize: n/a"
    "21:51:06.3129550","sqlservr.exe","1784","CreateFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health*_0_9223372036854775807.xel","NAME INVALID","Desired Access: Read Attributes, Synchronize, Dis, Options: Synchronous
    IO Non-Alert, Attributes: n/a, ShareMode: Read, Write, AllocationSize: n/a"
    "21:51:06.3131669","sqlservr.exe","1784","CreateFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health*_0_9223372036854775807.xel","NAME INVALID","Desired Access: Read Attributes, Synchronize, Dis, Options: Synchronous
    IO Alert, Attributes: n/a, ShareMode: Read, Write, AllocationSize: n/a"
    "21:51:06.3133810","sqlservr.exe","1784","CreateFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health*_0_9223372036854775807.xel","NAME INVALID","Desired Access: Read Attributes, Synchronize, Dis, Options: Synchronous
    IO Non-Alert, Open Reparse Point, Attributes: N, ShareMode: Read, Write, AllocationSize: n/a"
    "21:51:06.3227988","sqlservr.exe","1784","QueryDirectory","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health*.xel","SUCCESS","Filter: system_health*.xel, 1: system_health_0_130313895108460000.xel"
    "21:51:06.3233465","sqlservr.exe","1784","CreateFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130313895108460000.xel","SUCCESS","Desired Access: Generic Read, Dis, Options: Sequential Access, Synchronous
    IO Non-Alert, Non-Directory File, Open No Recall, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a, OpenResult: Opened"
    "21:51:06.3234822","sqlservr.exe","1784","QueryStandardInformationFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130313895108460000.xel","SUCCESS","AllocationSize: 1,019,904, EndOfFile: 1,019,904, NumberOfLinks:
    1, DeletePending: False, Directory: False"
    "21:51:06.3235452","sqlservr.exe","1784","CreateFileMapping","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130313895108460000.xel","FILE LOCKED WITH ONLY READERS","SyncType: SyncTypeCreateSection, PageProtection: "
    "21:51:06.3235944","sqlservr.exe","1784","QueryStandardInformationFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130313895108460000.xel","SUCCESS","AllocationSize: 1,019,904, EndOfFile: 1,019,904, NumberOfLinks:
    1, DeletePending: False, Directory: False"
    "21:51:06.3236933","sqlservr.exe","1784","CreateFileMapping","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130313895108460000.xel","SUCCESS","SyncType: SyncTypeOther"
    "21:51:06.3239419","sqlservr.exe","1784","CreateFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130314358881960000.xel","SUCCESS","Desired Access: Generic Read, Dis, Options: Sequential Access, Synchronous
    IO Non-Alert, Non-Directory File, Open No Recall, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a, OpenResult: Opened"
    "21:51:06.3240541","sqlservr.exe","1784","QueryStandardInformationFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130314358881960000.xel","SUCCESS","AllocationSize: 5,242,880, EndOfFile: 5,241,344, NumberOfLinks:
    1, DeletePending: False, Directory: False"
    "21:51:06.3241084","sqlservr.exe","1784","CreateFileMapping","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130314358881960000.xel","FILE LOCKED WITH ONLY READERS","SyncType: SyncTypeCreateSection, PageProtection: "
    "21:51:06.3241553","sqlservr.exe","1784","QueryStandardInformationFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130314358881960000.xel","SUCCESS","AllocationSize: 5,242,880, EndOfFile: 5,241,344, NumberOfLinks:
    1, DeletePending: False, Directory: False"
    "21:51:06.3242521","sqlservr.exe","1784","CreateFileMapping","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130314358881960000.xel","SUCCESS","SyncType: SyncTypeOther"
    "21:51:06.3245153","sqlservr.exe","1784","CreateFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130316898777600000.xel","SUCCESS","Desired Access: Generic Read, Dis, Options: Sequential Access, Synchronous
    IO Non-Alert, Non-Directory File, Open No Recall, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a, OpenResult: Opened"
    "21:51:06.3246253","sqlservr.exe","1784","QueryStandardInformationFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130316898777600000.xel","SUCCESS","AllocationSize: 954,368, EndOfFile: 952,832, NumberOfLinks: 1, DeletePending:
    False, Directory: False"
    "21:51:06.3246796","sqlservr.exe","1784","CreateFileMapping","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130316898777600000.xel","FILE LOCKED WITH ONLY READERS","SyncType: SyncTypeCreateSection, PageProtection: "
    "21:51:06.3247272","sqlservr.exe","1784","QueryStandardInformationFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130316898777600000.xel","SUCCESS","AllocationSize: 954,368, EndOfFile: 952,832, NumberOfLinks: 1, DeletePending:
    False, Directory: False"
    "21:51:06.3248247","sqlservr.exe","1784","CreateFileMapping","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130316898777600000.xel","SUCCESS","SyncType: SyncTypeOther"
    "21:51:06.3250799","sqlservr.exe","1784","CreateFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130317438925230000.xel","SUCCESS","Desired Access: Generic Read, Dis, Options: Sequential Access, Synchronous
    IO Non-Alert, Non-Directory File, Open No Recall, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a, OpenResult: Opened"
    "21:51:06.3251936","sqlservr.exe","1784","QueryStandardInformationFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130317438925230000.xel","SUCCESS","AllocationSize: 2,064,384, EndOfFile: 2,064,384, NumberOfLinks:
    1, DeletePending: False, Directory: False"
    "21:51:06.3252493","sqlservr.exe","1784","CreateFileMapping","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130317438925230000.xel","FILE LOCKED WITH ONLY READERS","SyncType: SyncTypeCreateSection, PageProtection: "
    "21:51:06.3252962","sqlservr.exe","1784","QueryStandardInformationFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130317438925230000.xel","SUCCESS","AllocationSize: 2,064,384, EndOfFile: 2,064,384, NumberOfLinks:
    1, DeletePending: False, Directory: False"
    "21:51:06.3253930","sqlservr.exe","1784","CreateFileMapping","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130317438925230000.xel","SUCCESS","SyncType: SyncTypeOther"
    "21:51:06.9404203","sqlservr.exe","1784","CloseFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130313895108460000.xel","SUCCESS",""
    "21:51:06.9405464","sqlservr.exe","1784","CloseFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130314358881960000.xel","SUCCESS",""
    "21:51:06.9406615","sqlservr.exe","1784","CloseFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130316898777600000.xel","SUCCESS",""
    "21:51:06.9408089","sqlservr.exe","1784","CloseFile","C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012ENT01\MSSQL\Log\system_health_0_130317438925230000.xel","SUCCESS",""
     

  • FIM Object Visualizer

    Name
    Latest Version
    FIM Object Visualizer
    6.0
    Description:
    The FIM Object Visualizer is a community script to display and document configurable objects such as Synchronization Rules, Workflows and Management Policy Rules:
    Display – because the script has a UI to render your configuration
    Document – because you can copy a displayed configuration to the clipboard and save it to a file.
    The script is based on the HTA (HTML Application) framework – a framework that enables you to develop scripts that look like Windows applications without the need of writing code in Visual Studio.
    Important
    To run the script, you need a FIM server with PowerShell installed.
    Please read the FIM ScriptBox Read Me First prior to running this script
    The FIM Object Visualizer is a customizable community script to display and document configurable objects such as Synchronization Rules, Workflows and Management Policy Rules.
    You can use this script to document your current FIM deployment or to provide configuration information in case of a troubleshooting scenario.
The script consists of two main components:
    Data Request
    Data Display
    The script assumes that all PowerShell scripts that are located in the Collection folder are scripts to request object information from your FIM server.
    When you start the script, the script code locates all these scripts and adds them to the left list box in the toolbar:
    To request new or update existing object information for a specific object type, select the object type you are interested in from the list box, and then click Get Objects.
    You can extend the number of supported object types by adding additional PowerShell scripts to the Collection folder.
    The second list box lists the object types for which you have already requested object information.
    To list the display names for an object type, select the object type from the list box, and then click Get Names:
    To display the configuration of an object, click the object's display name:
As mentioned earlier in this post, the FIM Object Visualizer is a community tool.
This means the objective of this download is to get you started with the process of documenting your deployment; however, I expect that you will modify the components of this script.
    For example, if you don't like the "look & feel" of how an object type is rendered, you can easily customize it by modifying the related XSLT file.
    If you have questions, comments or even extensions for this script, please respond to this post.
    To download this script, use this link.
    To get to the FIM ScriptBox, use this link.
    Markus Vilcinskas, Technical Content Developer, Microsoft Corporation

    The goal of this script is to enable you to create reports of various configurations.
    The most recent version supports the following reports:
    Active Metaverse Schema
    Attribute Flow Precedence
    FIMMA Schema
    FIM Resource
    Management Policy Rules
    Metaverse Schema
    Provisioning Triple
    Schema Object Definitions
    Selected Management Agent Attributes
    Synchronization Rules
    Replication Configuration
    Workflows
Below are some examples of what you can do with this script, along with abbreviated sample reports:
Active Metaverse Schema - This report shows the inbound population of your metaverse grouped by object type:
Metaverse Active Schema Configuration
Metaverse object type: group

Metaverse Attribute | Type | Multi-valued | Indexed | Import-Flows
membershipLocked | Boolean | no | no | 1
membershipAddWorkflow | String (non-indexable) | no | no | 1
domain | String (non-indexable) | no | no | 1
accountName | String (non-indexable) | no | no | 1
member | Reference (DN) | yes | no | 1
type | String (non-indexable) | no | no | 1
scope | String (non-indexable) | no | no | 1
displayName | String (non-indexable) | no | no | 1
csObjectID | String (non-indexable) | no | no | 1
Replication Configuration - This report shows your active metaverse schema configuration and whether an export attribute flow rule exists on the FIM MA for each metaverse attribute:
Metaverse Active Schema and FIMMA EAF Configuration
Metaverse object type: group

Metaverse Attribute | Type | Multi-valued | Indexed | Import-Flows | Replicated
membershipLocked | Boolean | no | no | 1 | yes
membershipAddWorkflow | String (non-indexable) | no | no | 1 | yes
domain | String (non-indexable) | no | no | 1 | yes
accountName | String (non-indexable) | no | no | 1 | no
member | Reference (DN) | yes | no | 1 | no
type | String (non-indexable) | no | no | 1 | yes
scope | String (non-indexable) | no | no | 1 | yes
displayName | String (non-indexable) | no | no | 1 | yes
csObjectID | String (non-indexable) | no | no | 1 | no
Attribute Flow Precedence - This report shows how each attribute in the metaverse is populated and in what order:
Metaverse Attribute Flow Configuration for group

accountName, ranked
Management Agent | Object Type | Type | Source Attributes
Fabrikam ADMA | group | sr | sAMAccountName

scope, ranked
Management Agent | Object Type | Type | Source Attributes
Fabrikam ADMA | group | sr | CustomExpression(IIF(Eq(BitAnd(2,groupType),2),"Global",IIF(Eq(BitAnd(4,groupType),4),"DomainLocal","Universal")))

type, ranked
Management Agent | Object Type | Type | Source Attributes
Fabrikam ADMA | group | sr | CustomExpression(IIF(Eq(BitOr(14,groupType),14),"Distribution","Security"))
FIMMA Schema - This report shows the schema definition of your FIMMA:
FIM MA Schema
Object type: Group

Attribute Name | Data Type | Required | Multi-Valued
AccountName | String | no | no
CreatedTime | DateTime | yes | no
Creator | Reference | no | no
DeletedTime | DateTime | no | no
Description | String | no | no
FIM Resource - This report shows the generic representation of an object in the FIM data store:
Export Object - Person

ObjectID: 7fb2b853-24f0-4498-9534-4e10589723c4
AccountName: administrator
CreatedTime: 1/20/2010 11:33:37 AM
Creator: 7fb2b853-24f0-4498-9534-4e10589723c4
DisplayName: administrator
Domain: FABRIKAM
DomainConfiguration: 1aff46f4-5511-452d-bcbd-7f7b34b0fe14
MailNickname: administrator
MVObjectID: {1FDD4880-9B68-4509-BAB1-AC34ABF50AC1}
ObjectSID: AQUAAAAAAAUVAAAAVn2Q+4bZuFuYINe99AEAAA==
ObjectType: Person
    Markus Vilcinskas, Knowledge Engineer, Microsoft Corporation

  • Using JSTL variables in JSP or Javascript. Possible ?

    Hi All,
Is it possible to share or use the variables which are declared and used by JSTL in JSP expression or scriptlet code and in JavaScript?
    Example:
    This Works:
<c:set var="test" value="JSTL" />
<c:out value="${test}" />
    But, this gives error:
    <% out.println(test) %>
And passing the value of the variable 'test' to JavaScript code also gives an error.
How can I use JSTL variables in JSP and in JavaScript?
    Yours,
    Sankar.B

By default, JSTL variables are kept in servlet attributes. The default is to store them in the page context. You can make it request/session/application scope as required via an attribute of the set tag.

Hi there,
    Can anyone advise how to access JSP variables in JSTL?
    Can it be done as the same method through request/session/application scope?
    Thnks...
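The scope ordering in that answer can be sketched in plain Java. This is an illustration only, with maps standing in for the real JSP scope objects and all names invented for the sketch; in a real JSP, pageContext.findAttribute performs this page, then request, then session, then application search.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// EL resolves ${test} by searching the four scopes in order:
// page, request, session, application. This sketch mirrors that lookup.
public class ScopeLookupDemo {
    // Scope maps in search order; the names are just labels.
    private final Map<String, Map<String, Object>> scopes = new LinkedHashMap<>();

    public ScopeLookupDemo() {
        for (String s : new String[] {"page", "request", "session", "application"}) {
            scopes.put(s, new LinkedHashMap<>());
        }
    }

    public void setAttribute(String scope, String name, Object value) {
        scopes.get(scope).put(name, value);
    }

    // The first match in scope order wins, like pageContext.findAttribute.
    public Object findAttribute(String name) {
        for (Map<String, Object> scope : scopes.values()) {
            if (scope.containsKey(name)) return scope.get(name);
        }
        return null;
    }
}
```

A value set in session scope is found as long as nothing with the same name exists in page or request scope; a page-scoped value with the same name shadows it.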

  • PPM - Object Links

    Hi All,
What is the possible mapping for the scenario mentioned below?
    1)     Create Portfolio
    2)     Create Bucket Structure for the Portfolio
    3)     Assign three Initiatives EX. IN1, IN2, IN3.
    4)     Create One Item for each Initiative IT1, IT2, IT3.
    5)     No Auto PS Project has to be created for this.
    6)     Create a PS Project manually with three top WBS Elements with subordinates and linking those WBS Elements
                         each to IT1-W1, IT2-W2, IT3-W3.  (By std we can only assign one project definition to the portfolio Item). I
                         assigned this through Object Links
    7)     Now the dates, status and Planned/Actual Costs/Budgets should transfer to Portfolio items and also dates/status
                         should also get synchronized between WBS Elements and Items.
How can this scenario be made possible? Is there a possibility to synchronize the dates/status of an object-linked WBS element to a portfolio item?
PS: I can create only one PS project because of Synergy planning.
    Rgds
    Rambo

    Hi John,
    valid questions.
re 1 (using 'object links' without DFM): I would think that you need to develop the attribute synchronization more or less from scratch in that case. We have done this for other customers before (e.g. in 4.5, when the DFM functionality was not available for PS yet). Depending on the exact requirements, the effort can become quite significant.
re 2 (object synchronization structure): this customizing is only relevant when you use DFM. And even then I would rather not change anything there. Basically the customizing activity allows you to use different input/output structures for DFM (that might be interesting when you connect multiple ERP systems with different releases to PPM through DFM). But so far I have never had the need to change the structures.
    I hope this helps you at least a bit (even though it is not a straightforward answer).
    Best regards
    Thorsten

  • How to automatically create the custom migration scripts after recreating SSMA project?

    How to automatically create the custom data migration scripts after recreating SSMA project?
There are a number of tables (big tables with BLOBs) which I want to set up to be migrated automatically with custom migration scripts (replacing e.g. an attribute named "FILE" with "TO_BLOB('') AS FILE").
So the question is: how do I open the MB file (I think it should be a standard db of some desktop RDBMS)?

Hi Roman.Pokrovskij,
According to your description, we can use the SSMA tool to migrate data from one database (including Access, Oracle and so on) to SQL Server via the GUI or scripts. There is an example below of migrating an Access database to SQL Server via custom migration scripts; you can review and refer to it.
    <?xml version="1.0" encoding="utf-8"?>
    <ssma-script-file xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="C:\Microsoft SQL Server Migration Assistant for Access\Schemas\A2SSConsoleScriptSchema.xsd">
    <config>
    <output-providers>
    <output-window suppress-messages="false"
    destination="stdout"/>
    <upgrade-project action="yes"/>
    <data-migration-connection source-use-last-used="true"
    target-server="target_1"/>
    <progress-reporting enable="false"
    report-messages="false"
    report-progress="off"/>
    <object-overwrite action="skip" />
    </output-providers>
    </config>
    <servers>
    <!-- Server definition for Sql server target server-->
    <sql-server name="target_1">
    <sql-server-authentication>
    <server value="$SQLServerName$"/>
    <database value="$SQLServerDb$"/>
    <user-id value="$SQLServerUsrID$"/>
    <password value="$SQLServerPassword$"/>
    <encrypt value="true"/>
    <trust-server-certificate value="true"/>
    </sql-server-authentication>
    </sql-server>
    </servers>
    <script-commands>
    <create-new-project project-folder="$project_folder$ "
    project-name="$project_name$"
    overwrite-if-exists="true"/>
    <connect-target-database server="target_1"/>
<load-access-database database-file="$AccessDbFolder$\$AccessDatabaseFile$"/>
    <!--Schema Mapping-->
    <map-schema source-schema="$AccessDatabase$" sql-server-schema="$SQLServerDb$.dbo" />
    <!-- Convert schema -->
    <!-- Example: Convert entire Schema (with all attributes)-->
    <convert-schema object-name="$AccessDatabase$"
    object-type="Databases"
    conversion-report-overwrite="true"
    verbose="true"
    report-errors="true" />
    <!-- Synchronize target -->
    <!-- Example: Synchronize target entire Database with all attributes-->
    <synchronize-target object-name="$SQLServerDb$.dbo"
    on-error="fail-script" />
    <!-- Data Migration-->
    <!--Example: Data Migration of all tables in the schema (with all attributes)-->
    <migrate-data object-name="$AccessDatabase$.Tables"
    object-type="category"
    report-errors="true"
    verbose="true"/>
    </script-commands>
    </ssma-script-file>
    There are similar scripts for migrating an Oracle database to SQL Server, and you can use a PowerShell script to automatically run the console against the script/variable files saved in a specified folder. For more information, review the following article.
    http://blogs.msdn.com/b/ssma/archive/2010/09/09/performing-database-migration-assessment-using-ssma-console-application.aspx
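    The console scripts above resolve their $...$ placeholders from a separate variable value file. As a rough illustration, such a file might look like the sketch below; the element structure is an assumption (check the schema and the sample variable file that ship with SSMA), and every name and value shown is a made-up placeholder.

```xml
<!-- Hypothetical variable value file: supplies values for the $...$
     placeholders used in the console script above. Element names are an
     assumption; verify against the schema shipped with SSMA. -->
<variables>
  <variable-group name="SqlServerConnection">
    <variable name="$SQLServerName$" value="MYSERVER\SQLEXPRESS" />
    <variable name="$SQLServerDb$" value="TargetDb" />
    <variable name="$SQLServerUsrID$" value="sa" />
    <variable name="$SQLServerPassword$" value="secret" />
  </variable-group>
  <variable-group name="AccessSource">
    <variable name="$AccessDbFolder$" value="C:\Data" />
    <variable name="$AccessDatabaseFile$" value="Northwind.mdb" />
    <variable name="$AccessDatabase$" value="Northwind" />
  </variable-group>
  <variable-group name="Project">
    <variable name="$project_folder$" value="C:\SSMAProjects" />
    <variable name="$project_name$" value="AccessMigration" />
  </variable-group>
</variables>
```

    The file is then passed to the console alongside the script file, so the same script can be reused against different servers and databases by swapping variable files.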
    Regards,
    Sofiya Li
    Sofiya Li
    TechNet Community Support

  • Using Microsoft.XMLHTTP in JSP or Java

    Hi,
    Can anybody tell me whether it is a good idea to use
    Microsoft.XMLHTTP in JSP and Java programs (in JavaScript functions) for connecting to a remote URL and sending XML data?
    regards,

    By default, JSTL variables are kept in servlet attributes; the default is to store them in the page context. You can make the scope request, session, or application as required via the scope attribute of the set tag.
    Hi there,
    Can anyone advise how to access JSP variables in JSTL?
    Can it be done the same way through request/session/application scope?
    Thanks...
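    As a concrete illustration of the scope attribute of the set tag mentioned above, a minimal JSP fragment might look like this; the variable name `catalog` is made up for the example.

```jsp
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>

<%-- Default: the variable goes into page scope --%>
<c:set var="catalog" value="products" />

<%-- Explicit scopes via the scope attribute --%>
<c:set var="catalog" value="products" scope="session" />
<c:set var="catalog" value="products" scope="application" />

<%-- EL reads it back, searching page -> request -> session -> application --%>
<c:out value="${catalog}" />

<%-- Or target one scope explicitly --%>
<c:out value="${sessionScope.catalog}" />
```

    Because `<c:set>` just writes a page/request/session/application attribute, the same value is also reachable from Java code via the corresponding `getAttribute("catalog")` call on that scope object.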

  • Error in SSMA console

    Hi ,
    I am getting the following vague error when executing the script with the SSMA console application:
    SSMAforOracleConsole.exe –s "C:\Users\Arup\Desktop\SQL Server\Console Application\ConversionAndDataMigrationSample_New.xml"
    FATALERR invalid argument used.
    Any help is highly appreciated

    Hi Lydia,
    Thanks for your reply. Please find below the code I am using for my SSMA migration.
    <?xml version="1.0" encoding="utf-8"?>
    <!--
    Script file for SSMA-v4.2 Console for Oracle.
    Commands execution order - from top to bottom.
    Command Processor distinguishes each command by element name.
    The element name is invariable! Never modify it!
    Use this file name as the parameter to SSMA-v4.2 Console for Oracle with mandatory
    option -s[cript]. See the documentation for SSMA-v4.2 Console for more information.
    -->
    <ssma-script-file xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="C:\Microsoft SQL Server Migration Assistant for Oracle\Schemas\O2SSConsoleScriptSchema.xsd">
    <!-- Variable values should mandatorily start and end with "$".
    These values can be defined in a separate variable value file
    (See :VariableValueFileSample.xml)
    ********** Set the variable values used by this sample **********
    ********** file in the corresponding Variables Value File **********
    -->
    <!-- The escape character for “$” is “$$”.If the value of a static value of a parameter begins with “$”,
    then "$$" must be specified to treat it as a static value instead of a variable. -->
    <!-- Optional section with console configuration options-->
    <config>
    <output-providers>
    <!-- Command specific messages do not appear on console if value set to "true".
    Attributes: destination="file" (optional)
    file-name="<file-path>" (optional along with destination attribute)
    By default destination="stdout" and suppress-messages="false" -->
    <output-window suppress-messages="false"
    destination="file"/>
    <!-- Enables upgrading a project created with an earlier version of SSMA to the current version.
    Available action attribute values
    • yes - upgrades the project (default)
    • no - displays an error and halts the execution
    • ask-user - prompts user for input (yes or no). -->
    <upgrade-project action="yes"/>
    <!--Enables creation of the database during connection. By default, the mode is error.
    Available mode values
    • ask-user - prompts user for input (yes or no).
    • error - console displays error and halts the execution.
    • continue - console continues the execution.-->
    <!--<user-input-popup mode="continue" />-->
    <!-- Data migration connection parameters
    Specifies which source or target server to be considered for data migration
    Attributes: source-use-last-used="true" (default) or source-server="source_servername"
    target-use-last-used="true" (default) or target-server="target_servername" -->
    <data-migration-connection source-use-last-used="false"
    target-server="target_1"/>
    <!-- Progress Reporting. By default progress reporting is disabled.
    report-progress attribute values
    • off
    • every-1%
    • every-2%
    • every-5%
    • every-10%
    • every-20% -->
    <progress-reporting enable="false"
    report-messages="false"
    report-progress="off"/>
    <!-- Reconnect manager -->
    <!-- Reconnection parameter settings in case of connection failure
    Available reconnection modes
    • reconnect-to-last-used-server - If the connection is not alive it tries to reconnect to last used server
    • generate-an-error - If the connection is not alive it throws an error(default)
    <reconnect-manager on-source-reconnect="reconnect-to-last-used-server"
    on-target-reconnect="generate-an-error"/>-->
    <!-- Prerequisites display options.
    If strict-mode is true, an exception is thrown in case of prerequisite failures-->
    <!--<prerequisites strict-mode="true"/>-->
    <!-- Object overwrite during conversion.By default, action is overwrite
    Available action values
    • error
    • overwrite
    • skip
    • ask-user -->
    <object-overwrite action="skip" />
    <!-- Sets log verbosity level. By default log verbosity level is "error"
    Available logger level options:
    • fatal-error - Only fatal-error messages are logged
    • error - Only error and fatal-error messages are logged
    • warning - All levels except debug and info messages are logged
    • info - All levels except debug messages are logged
    • debug - All levels of messages logged
    Note: Mandatory messages are logged at any level-->
    <!--<log-verbosity level="error"/>-->
    <!-- Override encrypted password in protected storage with script file password
    Default option: "false" - Order of search: 1) Protected storage 2) Script File / Server Connection File 3) Prompt User
    "true" - Order of search: 1) Script File / Server Connection File 2) Prompt User -->
    <!--<encrypted-password override="true"/>-->
    </output-providers>
    </config>
    <!-- Optional section with server definitions -->
    <!-- Note: Server definitions can be declared in a separate file
    or can be embedded as part of script file in the servers section (below)-->
    <servers>
    <sql-server name="target_1">
    <windows-authentication>
    <server value="ARUP-MANISHA\SQLEXPRESS"/>
    <database value="AdventureWorks2012"/>
    <encrypt value="true"/>
    <trust-server-certificate value="true"/>
    </windows-authentication>
    </sql-server>
    <oracle name="source_1">
    <standard-mode>
    <connection-provider value ="OracleClient"/>
    <host value="arup-manisha" />
    <port value="1521" />
    <instance value="XE" />
    <user-id value="arup" />
    <password value="arup"/>
    </standard-mode>
    </oracle>
    <script-commands>
    <!--Create a new project.
    • Customize the new project created with project-folder and project-name attributes.
    • overwrite-if-exists attribute can take values "true/false" with default as "false".
    • project-type (optional attribute) can take values
    sql-server-2005 - Creates SSMA 2005 project
    sql-server-2008 - Creates SSMA 2008 project (default)
    sql-server-2012 - Creates SSMA 2012 project
    sql-server-2014 - Creates SSMA 2014 project
    sql-azure - Creates SSMA Azure project -->
    <create-new-project project-folder="C:\Users\Arup\Documents\SSMAProjects"
    project-name="SqlMigration5"
    overwrite-if-exists="true"
    project-type="sql-server-2012"
    />
    <!-- Connect to source database -->
    <!-- • Server(id) needs to mandatorily be defined in the servers section of the
    script file or in the Servers Connection File-->
    <connect-source-database server="source_1" />
    <!-- Connect to target database -->
    <!-- • Server(id) needs to mandatorily be defined in the servers section of the
    script file or in the Servers Connection File-->
    <connect-target-database server="target_1" />
    <!--Schema Mapping-->
    <!-- • source-schema specifies the source schema we intend to migrate.
    • sql-server-schema specifies the target schema where we want it to be migrated.-->
    <map-schema source-schema="arup"
    sql-server-schema="arup.dbo" />
    <!--Refresh from database-->
    <!-- Refreshes the source database
    • object-name specifies the object(s) considered for refresh .
    (can have individual object names or a group object name)
    • object-type specifies the type of the object specified in the object-name attribute.
    (if object category is specified then object type will be "category")
    • on-error specifies whether to specify refresh errors as warnings or error.
    Available options for on-error:
    •report-total-as-warning
    •report-each-as-warning
    •fail-script
    • report-errors-to specifies location of error report for the refresh operation (optional attribute)
    if only a folder path is given, then a file named SourceDBRefreshReport.XML is created -->
    <!-- Example1: Refresh entire Schema (with all attributes)-->
    <!--<refresh-from-database object-name="$OracleSchemaName$"
    object-type ="Schemas"
    on-error="fail-script"
    report-errors-to="$RefreshDBFolder$" /> -->
    <!-- Example2: Refresh a particular category say a procedure (other convention of the command with only mandatory attributes)-->
    <!--<refresh-from-database>
    <metabase-object object-name="$OracleSchemaName$.Testproc" object-type="Procedures"/>
    </refresh-from-database>-->
    <!-- Convert schema -->
    <!-- • object-name specifies the object(s) considered for conversion .
    (can have individual object names or a group object name)
    • object-type specifies the type of the object specified in the object-name attribute.
    (if object category is specified then object type will be "category")
    • conversion-report-folder specifies folder where the conversion report can to be stored.(optional attribute)
    • conversion-report-overwrite specifies whether to overwrite the conversion report folder if it already exists.
    Default value: false. (optional attribute)
    • write-summary-report-to specifies the path where the summary report will be generated.
    If only the folder path is mentioned, then a file named SchemaConversionReport.XML is created. (optional attribute)
    • Summary report creation has 2 further sub-categories
    • report-errors (="true/false", with default as "false" (optional attributes))
    • verbose (="true/false", with default as "false" (optional attributes))
    -->
    <!-- Example1: Convert entire Schema (with all attributes)-->
    <convert-schema object-name="$OracleSchemaName$"
    object-type="Schemas"
    write-summary-report-to="$SummaryReports$"
    verbose="true"
    report-errors="true"
    conversion-report-folder="$ConvertARReportsFolder$"
    conversion-report-overwrite="true" />
    <!-- Example2: Convert entire Schema (only with mandatory attributes)-->
    <!--<convert-schema object-name="$OracleSchemaName$"
    object-type="Schemas" />-->
    <!-- alternate convention for ConvertSchema command-->
    <!-- Example3: Convert a specific category(say Tables)-->
    <!--<convert-schema>
    <metabase-object object-name="$OracleSchemaName$.Tables"
    object-type="category" />
    </convert-schema>-->
    <!-- Example4: Convert Schema for a specific object(say Table)
    (with only a few optional attributes & write-summary-report-to with a file name)-->
    <!--<convert-schema object-name="$OracleSchemaName$.TestTbl"
    object-type="Tables"
    write-summary-report-to="$SummaryReports$\ConvertSchemaReport1.xml"
    report-errors="true"
    />-->
    <!-- Synchronize target -->
    <!-- • object-name specifies the object(s) considered for synchronization.
    (can have individual object names or a group object name)
    • object-type specifies the type of the object specified in the object-name attribute.
    (if object category is specified then object type will be "category")
    • on-error specifies whether to specify synchronization errors as warnings or error.
    Available options for on-error:
    •report-total-as-warning
    •report-each-as-warning
    •fail-script
    • report-errors-to specifies location of error report for the synchronization operation (optional attribute)
    if only a folder path is given, then a file named TargetSynchronizationReport.XML is created.
    -->
    <!-- Example1: Synchronize target entire schema of Database with all attributes-->
    <synchronize-target object-name="arup.dbo"
    on-error="fail-script"
    report-errors-to="$SynchronizationReports$" />
    <!-- Example2: Synchronizing a particular category (say Procedures) of the schema alone -->
    <!--<synchronize-target object-name="$SQLServerDb$.dbo.Procedures"
    object-type="category" />-->
    <!--(alternative convention for Synchronize target command)-->
    <!-- Example3: Synchronization target of individual objects -->
    <!--<synchronize-target>
    <metabase-object object-name="$SQLServerDb$.dbo.TestTbl"
    object-type="Tables" />
    </synchronize-target>-->
    <!-- Example4: Synchronization of individual objects with no object-type attribute-->
    <!--<synchronize-target>
    <metabase-object object-name="$SQLServerDb$.dbo.TestTbl" />
    </synchronize-target>-->
    <!-- Save As Script-->
    <!-- Used to save the scripts of the objects to the file mentioned.
    When metabase=target, this is an alternative to the synchronize command: we obtain the scripts and execute them on the target database.
    • object-name specifies the object(s) whose scripts are to be saved.
    (can have individual object names or a group object name)
    • object-type specifies the type of the object specified in the object-name attribute.
    (if object category is specified then object type will be "category")
    • destination specifies the path or folder where the script is to be saved; if no file name is given, a file named (object_name attribute value).out is created.
    • metabase specifies whether it is the source or the target metabase.
    • overwrite - if true, an existing file with the same name is overwritten. It can take the values true/false. -->
    <!-- Example1 : Save as script from source metabase-->
    <!-- <save-as-script destination="$SaveScriptFolder$\Script1.sql"
    metabase="source"
    object-name="$OracleSchemaName$"
    object-type="Schemas"
    overwrite="true" />-->
    <!-- Example2 : Save as script from target metabase-->
    <!-- <save-as-script metabase="target" destination="$SaveScriptFolder$\Script2.sql" >
    <metabase-object object-name="$SQLServerDb$" object-type ="Databases"/>
    </save-as-script> -->
    <!-- Data Migration-->
    <!-- • object-name specifies the object(s) considered for data migration .
    (can have individual object names or a group object name)
    • object-type specifies the type of the object specified in the object-name attribute.
    (if object category is specified then object type will be "category")
    • write-summary-report-to specifies the path where the summary report will be generated.
    If only the folder path is mentioned, then a file named DataMigrationReport.XML is created. (optional attribute)
    • Summary report creation has 2 further sub-categories
    • report-errors (="true/false", with default as "false" (optional attributes))
    • verbose (="true/false", with default as "false" (optional attributes))
    -->
    <!--Example1: Data Migration of all tables in the schema (with all attributes)-->
    <migrate-data object-name="arup.Tables"
    object-type="category"
    write-summary-report-to="$SummaryReports$"
    report-errors="true"
    verbose="true" />
    <!--alternative convention for Data Migration Command-->
    <!--Example2: Data Migration of specific tables with no object-type attribute & write-summary-report-to with a file name -->
    <!--<migrate-data write-summary-report-to="$SummaryReports$\datamigreport.xml"
    verbose="true">
    <metabase-object object-name="$OracleSchemaName$.TestTbl" />
    </migrate-data>-->
    <!-- Save project -->
    <save-project />
    <!-- Close project -->
    <close-project />
    </script-commands>
    </ssma-script-file>

  • Cannot instantly delete/edit .exe files.

    Hello, 
    The problem is that my .exe files are held open by something invisible, and the only edit that seems to work instantly is renaming.
    Firstly - I'm the only one using this laptop, I'm admin, running programs as admin, and this still happens. Used all available Avast virus scans - found some malware, but removing it didn't fix the problem.
    When I try to shift-delete them it takes 10 to 60 seconds and a bunch of retries for the file to actually disappear... OK - I could live with it... But the real problem is when trying to replace one with a newer version - Windows Explorer says "used by other program",
    DevC++ says "Permission denied", Code::Blocks - "Permission denied", MS Visual C++ 2010 Express - "cannot open file"... This is extremely irritating when trying to program and debug...
    Using ASUS K55VD laptop i5 version with Win7 Pro.
    Tried stripping all non-essential or less trusted processes - didn't help.
    Tried disabling a bunch of services - didn't help.
    PLEASE HELP! ^^
    Don't want to re-install Windows - have too many programs on the C disk that I don't really want to lose/re-install...

    Wow, that procmon creates huge logs fast... 
    Checked the antivirus - even when fully disabled it doesn't help.
    Now sifting through procmon logs: A LOT of registry accesses by svchost.exe to HKCR\CLSID\{54D8502C-527D-43F7-A506-A9DA075E229C} areas like: InprocServer32\(Default); InprocServer32; InProcServer32
    Edit: OK, definitely haven't studied enough yet... ^^ at least as to what's up with these accesses. Could you give any pointers on what to look for?
    Well, this problem is not present in safe mode, but I'm not a fan of going into safe mode every time I need to program something.
    Edit#2: now a bunch of this happens: Explorer.EXE CreateFile C:\users\edgetech\appdata\local\microsoft\office\groove\user\GFSConfig.xml PATH NOT FOUND (i deleted folder to see maybe it'd stop)
    Edit #3: What happens when i try to manually overwrite:
    22:44:08,7473442 Explorer.EXE 1656 CreateFile C:\ SUCCESS Desired Access: Read Data/List Directory, Read Attributes, Synchronize, Dis, Options: Synchronous IO Non-Alert, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a, OpenResult: Opened
    22:44:08,7473992 Explorer.EXE 1656 FileSystemControl C:\ INVALID DEVICE REQUEST Control: FSCTL_LMR_QUERY_DEBUG_INFO
    22:44:08,7474185 Explorer.EXE 1656 QueryDirectory C:\Dev-Cpp SUCCESS Filter: Dev-Cpp, 1: Dev-Cpp
    22:44:08,7474595 Explorer.EXE 1656 CloseFile C:\ SUCCESS
    22:44:08,7475646 Explorer.EXE 1656 CreateFile C:\Dev-Cpp SUCCESS Desired Access: Read Data/List Directory, Read Attributes, Synchronize, Dis, Options: Synchronous IO Non-Alert, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a, OpenResult: Opened
    22:44:08,7476020 Explorer.EXE 1656 FileSystemControl C:\Dev-Cpp INVALID DEVICE REQUEST Control: FSCTL_LMR_QUERY_DEBUG_INFO
    22:44:08,7476184 Explorer.EXE 1656 QueryDirectory C:\Dev-Cpp\edgetech SUCCESS Filter: edgetech, 1: edgetech
    22:44:08,7476484 Explorer.EXE 1656 CloseFile C:\Dev-Cpp SUCCESS
    22:44:08,7578214 Explorer.EXE 1656 CreateFile C:\Dev-Cpp\edgetech IS DIRECTORY Desired Access: Generic Read/Write, Dis, Options: No Buffering, Synchronous IO Non-Alert, Non-Directory File, Attributes: N, ShareMode: None, AllocationSize: n/a
    22:44:08,7579011 Explorer.EXE 1656 CreateFile C:\Dev-Cpp\edgetech IS DIRECTORY Desired Access: Generic Read, Dis, Options: No Buffering, Synchronous IO Non-Alert, Non-Directory File, Attributes: N, ShareMode: Read, Write, Delete, AllocationSize: n/a
    22:44:08,7579536 Explorer.EXE 1656 CreateFile C:\Dev-Cpp\edgetech IS DIRECTORY Desired Access: Read Attributes, Synchronize, Dis, Options: No Buffering, Synchronous IO Non-Alert, Non-Directory File, Attributes: N, ShareMode: Read, Write, Delete, AllocationSize: n/a
    22:44:08,7610124 Explorer.EXE 1656 QueryStandardInformationFile C:\Users\edgetech\AppData\Local\Microsoft\Windows\Explorer\thumbcache_idx.db SUCCESS AllocationSize: 28.672, EndOfFile: 25.880, NumberOfLinks: 1, DeletePending: False, Directory: False
    22:44:09,4927768 Explorer.EXE 1656 CreateFile C:\Dev-Cpp\edgetech\bugfixing.exe SHARING VIOLATION Desired Access: Generic Read/Write, Write DAC, Dis, Options: Sequential Access, Synchronous IO Non-Alert, Non-Directory File, Attributes: A, ShareMode: None, AllocationSize: 15.839
    22:44:09,4932046 Explorer.EXE 1656 CreateFile C:\Dev-Cpp\edgetech\bugfixing.exe SHARING VIOLATION Desired Access: Generic Read/Write, Write DAC, Dis, Options: Sequential Access, Synchronous IO Non-Alert, Non-Directory File, Attributes: A, ShareMode: Read, Write, AllocationSize: 15.839
    22:44:09,4934562 Explorer.EXE 1656 CreateFile C:\Dev-Cpp\edgetech SUCCESS Desired Access: Read Data/List Directory, Read Attributes, Synchronize, Dis, Options: Synchronous IO Non-Alert, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a, OpenResult: Opened
    22:44:09,4935453 Explorer.EXE 1656 FileSystemControl C:\Dev-Cpp\edgetech INVALID DEVICE REQUEST Control: FSCTL_LMR_QUERY_DEBUG_INFO
    22:44:09,4935819 Explorer.EXE 1656 QueryDirectory C:\Dev-Cpp\edgetech\bugfixing.exe SUCCESS Filter: bugfixing.exe, 1: bugfixing.exe
    22:44:09,4936603 Explorer.EXE 1656 CloseFile C:\Dev-Cpp\edgetech SUCCESS
    Edit #4: Again tried the SAME copy action but got different results (?):
    22:52:00,9589512 DllHost.exe 1744 CreateFile C:\Dev-Cpp\edgetech\bugfixing.exe DELETE PENDING Desired Access: Generic Read/Write, Write DAC, Dis, Options: Sequential Access, Synchronous IO Non-Alert, Non-Directory File, Attributes: A, ShareMode: None, AllocationSize: 15.839
    22:52:00,9593026 DllHost.exe 1744 CreateFile C:\Dev-Cpp\edgetech\bugfixing.exe DELETE PENDING Desired Access: Read Attributes, Dis, Options: Open Reparse Point, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a
    22:52:00,9594089 DllHost.exe 1744 CreateFile C:\Dev-Cpp\edgetech\bugfixing.exe DELETE PENDING Desired Access: Read Attributes, Dis, Options: Open Reparse Point, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a
    22:52:00,9595042 DllHost.exe 1744 CreateFile C:\Dev-Cpp\edgetech\bugfixing.exe DELETE PENDING Desired Access: Read Attributes, Dis, Options: Open Reparse Point, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a
    22:52:00,9610752 DllHost.exe 1744 CreateFile C:\ SUCCESS Desired Access: Read Data/List Directory, Read Attributes, Synchronize, Dis, Options: Synchronous IO Non-Alert, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a, OpenResult: Opened
    22:52:00,9611298 DllHost.exe 1744 FileSystemControl C:\ INVALID DEVICE REQUEST Control: FSCTL_LMR_QUERY_DEBUG_INFO
    22:52:00,9611516 DllHost.exe 1744 QueryDirectory C:\Dev-Cpp SUCCESS Filter: Dev-Cpp, 1: Dev-Cpp
    22:52:00,9611980 DllHost.exe 1744 CloseFile C:\ SUCCESS
    22:52:00,9613063 DllHost.exe 1744 CreateFile C:\Dev-Cpp SUCCESS Desired Access: Read Data/List Directory, Read Attributes, Synchronize, Dis, Options: Synchronous IO Non-Alert, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a, OpenResult: Opened
    22:52:00,9613466 DllHost.exe 1744 FileSystemControl C:\Dev-Cpp INVALID DEVICE REQUEST Control: FSCTL_LMR_QUERY_DEBUG_INFO
    22:52:00,9613654 DllHost.exe 1744 QueryDirectory C:\Dev-Cpp\edgetech SUCCESS Filter: edgetech, 1: edgetech
    22:52:00,9613999 DllHost.exe 1744 CloseFile C:\Dev-Cpp SUCCESS
    22:52:00,9644870 DllHost.exe 1744 CreateFile C:\Dev-Cpp\edgetech SUCCESS Desired Access: Read Attributes, Read Control, Dis, Options: Open Reparse Point, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a, OpenResult: Opened
    22:52:00,9645285 DllHost.exe 1744 QuerySecurityFile C:\Dev-Cpp\edgetech BUFFER OVERFLOW Information: Owner
    22:52:00,9645503 DllHost.exe 1744 QuerySecurityFile C:\Dev-Cpp\edgetech SUCCESS Information: Owner
    22:52:00,9645687 DllHost.exe 1744 CloseFile C:\Dev-Cpp\edgetech SUCCESS
    22:52:00,9687248 DllHost.exe 1744 QueryStandardInformationFile C:\Users\edgetech\AppData\Local\Microsoft\Windows\Explorer\thumbcache_idx.db SUCCESS AllocationSize: 28.672, EndOfFile: 25.880, NumberOfLinks: 1, DeletePending: False, Directory: False
    At this point I'm definitely thinking a Windows re-install would be easier/faster. Gonna take a break for today and check more tomorrow with a clear head...

  • Contradictory info about accessing resources

    Hello,
    I am a servlet newbie. I have gone through a lot of posts
    in the archives on accessing resources and now I am confused
    more than ever. The problem is that my guestbook servlet's doPost() will
    read and write a file on the same server.
    Now some posts say it should be rare that the servlet is called by two users at the same
    time and you shouldn't be worried about file locking. While other posts
    imply I need to make the servlet a single Thread servlet for this to work.
    And yet another group of posts suggest I could use normal servlet and
    synchronize the methods that read and write.
    So what exactly is the way to go about read and write one file?
    Any hints would be much appreciated!

    Now some posts say it should be rare that the servlet is called by two users at the same time and you shouldn't be worried about file locking.
    Very dangerous. If your app allows concurrent users, you must deal with concurrency issues.
    While other posts imply I need to make the servlet a single thread servlet for this to work.
    If every user's session writes to the same file, then a single-thread servlet will not solve the problem. You'll just have multiple instances of a servlet writing to one file instead of multiple threads of the same instance writing to one file.
    And yet another group of posts suggest I could use a normal servlet and synchronize the methods that read and write.
    Sounds like the safest approach. If the same flat file is used by all, then synchronization is necessary. Create a class that handles all I/O to your file and add the synchronization code there.
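    Following that last suggestion, a minimal sketch of such a class might look like the code below. The class and file names are made up for the example: you would create one instance at context initialization, store it as a ServletContext attribute, and route all guestbook reads and writes through it so the synchronized methods serialize access to the file.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

/**
 * Hypothetical guestbook store: a single shared instance (e.g. held as a
 * ServletContext attribute) that owns all I/O to one flat file. The
 * synchronized methods ensure only one request thread touches the file
 * at a time.
 */
public class GuestbookStore {
    private final Path file;

    public GuestbookStore(Path file) {
        this.file = file;
    }

    // Only one thread at a time may append an entry.
    public synchronized void addEntry(String entry) throws IOException {
        Files.write(file, (entry + System.lineSeparator()).getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // Reads synchronize on the same monitor, so they never observe a
    // half-written entry.
    public synchronized List<String> readEntries() throws IOException {
        if (!Files.exists(file)) {
            return List.of();
        }
        return Files.readAllLines(file);
    }

    // Small standalone demonstration.
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("guestbook", ".txt");
        GuestbookStore store = new GuestbookStore(tmp);
        store.addEntry("hello");
        store.addEntry("world");
        System.out.println(store.readEntries().size()); // prints 2
        Files.deleteIfExists(tmp);
    }
}
```

    A servlet's doPost() would then call context.getAttribute(...) to fetch the shared store and invoke addEntry(), never opening the file itself; that keeps the locking policy in one place regardless of how many servlet threads or instances exist.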
