ODI Control Mechanism

I need to design a control mechanism for ODI (Sunopsis). If anyone has designed one for their client, such a document would be highly appreciated.
I need the following steps to be included in the control mechanism:
1. Organize the workflow
2. Restart strategy for scenarios
3. Log management for inserted rows and start and end times. (The client is not interested in using the Operator for logs; a minimal sketch of such an audit table follows below.)
Thanks
Saif
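
For point 3, since the Operator is not to be used, one common approach is to have every scenario write a row to a custom audit table when it starts and when it finishes. The sketch below is only an illustration in plain JDBC: the ODI_RUN_LOG table, its columns and the connection details are assumptions rather than ODI objects, and in practice the same INSERT would usually be issued from a procedure step at the start and end of the package, with the caller supplying the inserted-row count.
<pre>
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

// Minimal sketch of a custom audit log, assuming a hypothetical table
//   ODI_RUN_LOG(SCENARIO_NAME, STEP_NAME, ROWS_INSERTED, START_TIME, END_TIME, STATUS).
// Connection details and table/column names are illustrative only.
public class OdiRunLogger {

    private final String url;
    private final String user;
    private final String password;

    public OdiRunLogger(String url, String user, String password) {
        this.url = url;
        this.user = user;
        this.password = password;
    }

    // Writes one audit row with the insert count and the start/end timestamps.
    public void log(String scenario, String step, long rowsInserted,
                    Timestamp start, Timestamp end, String status) throws Exception {
        String sql = "INSERT INTO ODI_RUN_LOG "
                   + "(SCENARIO_NAME, STEP_NAME, ROWS_INSERTED, START_TIME, END_TIME, STATUS) "
                   + "VALUES (?, ?, ?, ?, ?, ?)";
        try (Connection con = DriverManager.getConnection(url, user, password);
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, scenario);
            ps.setString(2, step);
            ps.setLong(3, rowsInserted);
            ps.setTimestamp(4, start);
            ps.setTimestamp(5, end);
            ps.setString(6, status);
            ps.executeUpdate();
        }
    }
}
</pre>
A similar table with a status column per scenario and run also covers the restart strategy in point 2; a sketch of that appears further below in the thread about re-running a package.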

Hi,
Is the username you are connecting with from the backend, where firing the query succeeds,
the same username being used for the connection through Topology Manager?
Reshma

Similar Messages

  • Access Control Mechanism in JSF

    Hi all,
    I am working on a project (still in the analysis phase) and have decided to go for JSF over Struts, as I have found that JSF has good advantages over Struts. I am searching for a good model (with code) to implement a robust access control mechanism with JSF. I hope that JSF has developed a good access control mechanism in its current version.
    If anyone has an example, please send it to me at
    [email protected]. Thanks.

    What exactly do you expect of an Access Control Mechanism? There is no such thing in the JSF RI (Mojarra).
    Aren't you just talking about login/logout and checking for a logged-in user? A Filter and a database are sufficient for that. If you want to display/hide specific JSF components depending on the logged-in user, you can use the 'rendered' attribute for that.
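
    As a rough illustration of the Filter approach mentioned above, here is a minimal sketch of a login-check servlet filter. It assumes your login code stores the logged-in user in a session attribute named "user" and that the login page lives at /login.xhtml; both names are placeholders for the example.
    <pre>
    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // Minimal login-check filter (sketch). Map it in web.xml to the pages you want
    // to protect; the "user" session attribute and /login.xhtml path are assumptions.
    public class LoginFilter implements Filter {

        public void init(FilterConfig config) throws ServletException {
            // no configuration needed for this sketch
        }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;
            HttpSession session = request.getSession(false);

            boolean loggedIn = (session != null && session.getAttribute("user") != null);
            if (loggedIn) {
                // user is known: let the request through
                chain.doFilter(req, res);
            } else {
                // not logged in: send to the login page
                response.sendRedirect(request.getContextPath() + "/login.xhtml");
            }
        }

        public void destroy() {
            // nothing to clean up
        }
    }
    </pre>
    The 'rendered' attribute can then test the same session information on individual components, for example rendered="#{not empty sessionScope.user}" on components that should only be visible to logged-in users.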

  • Version Control Mechanism for MDM

    Hi All,
    Can anybody please tell me how the version control mechanism is used in SAP NW MDM?
    Thanks in advance
    Chandan

    Hi Chandan,
    If you are looking to maintain the history of all the master records, then to the best of my knowledge that is not available in MDM. MDM only deals with the latest version of records; however, you can still get back to the old values of records (in the database) using the change tracking functionality in MDM.
    However, the MDM GUIs will show you only the latest updated values.
    To know about the change tracking and how it can be integrated with a UI, refer to the link below:
    How to Configure SAP MDM Change Tracker for Any Repository:
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/807861ac-f941-2c10-8f8d-c57f9d421b99
    Now, in case you are asking about the ALL VERSIONS feature of MDM, which we use in relation to check-out, then refer to the link below:
    How to Control Versioning in SAP MDM
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/d0b392b8-d4b7-2b10-a189-b3b61df3d9c4
    Hope it helps.
    Thanks and Regards
    Nitin jain

  • Delta control Mechanism

    Hi Experts,
    Can anyone give full information about the delta control mechanism?
    1. How does it work?
    2. When deltas are loaded from an ODS to a cube, if I want to delete a delta either from the cube or from the ODS, how can it be done?
    Thanks in advance.

    Hi Dinesh,
    Please go through the scenario:
    1. I am loading data to an ODS from R/3; this is further updated to a cube.
    2. First I loaded a full upload from R/3 to the ODS and also to the cube.
    3. Next I initialized the delta.
    4. Now I am extracting the deltas from R/3 to the ODS and finally to the cube.
    5. Now my ODS and cube both contain one full update, the initialization, and the deltas D1, D2 and D3.
    6. Now delta D2 has gone wrong and I need to delete it from both the ODS and the cube. Now my questions are:
    6.1 If I delete D2 from the ODS, will all the deltas be deleted?
    6.2 If all the deltas are deleted, how can I take care of this scenario?
    In the ODS Manage option there is a tab called Data Mart Status of the Request. After the deltas are loaded to the cube this tab gets a tick mark, and clicking the tick mark takes me to a Delta Control Mechanism screen.
    How will this help in this respect?
    Pradeep

  • License Control Mechanism Implementation

    Hi,
    I am now trying to implement a license control mechanism for a web-based application.
    I've studied several websites about this topic, but they did not provide much detail about the
    implementation. So could somebody please clarify for me
    where (websites, papers, books, APIs...) I should look for this information?
    Thanks.

    Hi,
    Chapter 5, Software Piracy and Code Licensing Schemes, of ISBN 0072225653 is a nice and simple introduction to this topic with full code examples.
    Regards,
    Duncan Eley.

  • Idoc control mechanism

    Hi,
    I have a situation with processing IDocs in the background.
    Can anyone give me some info on how to control this processing?
    I am talking about PO-to-SO automation between two SAP systems.
    Let's assume:
    The background job which triggers sending IDocs from the sender system runs at 12:00 noon.
    At 10 am a PO is created, an IDoc is created for this PO, and it is sent to the port.
    At 11 am the same PO gets some changes and that IDoc is also sent to the port (FYI - I am not using the order change message type).
    At the port I want the IDoc created at 11 am to be processed before the one created at 10 am (the latest first),
    i.e. LIFO (last in, first out), when sending the IDocs.
    Any kind of input is greatly appreciated.
    Regards

    Hi Neela,
    Please explain your problem in detail if the following is not useful:
    The master IDoc contains all the records (control record, data records & status records).
    After that, the distribution model is consulted for the sender and receivers (Tcode BD64). Then communication IDocs are generated for the identified receivers, and a function module distributes those communication IDocs. Communication IDocs are generated before entering the ALE layer.
    Ashven

  • Access Control Mechanism (data level security) not working properly

    Hi Experts,
    I have implemented data-level security for groups with the help of a database table. This table contains UserId, Dept. Code and GroupName columns. UserIds are verified by the LDAP server while logging into the dashboard. I have made two init blocks, for GroupName and Dept. Code.
    The query is:
    SELECT 'Group', GroupName FROM TABLE
    WHERE UserId = ':USER'
    A similar query is used for Dept Code.
    There are two groups: 1. CC_User and 2. Full_User. I have applied a filter in PERMISSIONS for CC_User on the fact table on Dept Code, so a user in this group may see only the data for the Dept Code aligned to him in the table. Full_User may see the whole data for all Dept Codes, as no filter is applied to this group.
    Dept Code, UserId and GroupName are Varchar.
    Now the problem is this: when a user has membership of one group, it works fine. For CC_User it shows data for its Dept Code, and Full_User may see the whole data.
    But when a user has membership of both groups, only the data related to the CC_User group is visible. In my view, the maximum permission out of the two groups should be applied to the user if he belongs to more than one group.
    So here he should see the whole data, as the Full_User group can see the full data.
    Does the least restrictive permission apply in the case of membership of more than one group in OBIEE?

    848839 wrote:
    Does the least restrictive permission apply in the case of membership of more than one group in OBIEE?
    Indeed it does. The most restrictive filters get applied if a user belongs to multiple groups that have filters at various levels of data, because it is always an AND clause in the WHERE condition. This is the sort of behavior I have seen in various tools apart from OBIEE.
    Hope this helps.
    Regards,
    -Amith.

  • HT203171 Trackpad is erratic, sometimes scrolling without touching the pad, also highlighting chunks of my thesis with the touch of a finger. Frustrating, because I have accidentally deleted text while the trackpad was out of control.

    My MacBook Air has developed an erratic problem with the trackpad. Sometimes the page scrolls without touching the pad, great chunks of my thesis can be highlighted, and sometimes I delete without realising. Does anyone else have this problem? The tech wants to replace my trackpad today; I'm not sure what to do.

    See if any of these items apply:
    http://support.apple.com/kb/TS1449
    Regards,
    Captfred

  • ODI built in for checking the scenario status in case of re-run

    I have a package like this:
    scenario1 ----(ok)-----> scenario2 ----(ok)-----> scenario3
    When I run the package for the first time, assume that scenario1 is executed successfully and scenario2 has failed.
    I correct the error in scenario2 and run the package again. I don't want scenario1 to be executed, as it has already been executed successfully. The execution should start from scenario2.
    Is there any built-in tool in ODI to achieve this?
    Thanks
    Pardha

    There is no default mechanism for doing this. It could be done by using your own run control mechanism, logging the execution, and running with variables.
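    To give an idea of what such a home-grown run control could look like, here is a minimal sketch in plain JDBC. The SCEN_RUN_CONTROL table, its columns and the connection handling are assumptions for the example; in a package you would typically refresh an ODI variable with the same query that alreadySucceeded runs, and evaluate that variable before each scenario step so a rerun skips the scenarios already marked 'OK'.
    <pre>
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Sketch of a home-grown run control table, assuming
    //   SCEN_RUN_CONTROL(RUN_ID, SCENARIO_NAME, STATUS)
    // where STATUS is 'OK' once a scenario has finished successfully.
    // Table and column names are illustrative only.
    public class ScenarioRunControl {

        private final Connection con;

        public ScenarioRunControl(Connection con) {
            this.con = con;   // an open connection to the control schema
        }

        // Returns true if the scenario already completed successfully for this run id.
        public boolean alreadySucceeded(String runId, String scenario) throws Exception {
            String sql = "SELECT 1 FROM SCEN_RUN_CONTROL "
                       + "WHERE RUN_ID = ? AND SCENARIO_NAME = ? AND STATUS = 'OK'";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, runId);
                ps.setString(2, scenario);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next();
                }
            }
        }

        // Records a successful execution so a restart of the package can skip it.
        public void markSuccess(String runId, String scenario) throws Exception {
            String sql = "INSERT INTO SCEN_RUN_CONTROL (RUN_ID, SCENARIO_NAME, STATUS) "
                       + "VALUES (?, ?, 'OK')";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, runId);
                ps.setString(2, scenario);
                ps.executeUpdate();
            }
        }
    }
    </pre>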

  • Regarding version control

    Hi,
    Could you please help me out by giving some ideas about version control in SAP?
    First let me give an example:
    If I develop something in the development server and later transport it to the QA server and then to the production server, is there any change in version?
    Please give me the details of this issue.
    Thanks,
    Batista....

    Hi Priya,
    Version Control
    Version control is a mechanism that helps maintain the revision history of a development resource and track the changes made to it. It defines a set of constraints on how a development resource can be changed. A development resource that complies with the constraints defined by the version control is called a versioned resource. When a versioned resource is modified or deleted, a new version is created for the resource. A unique sequence number is associated with each version of the resource created in a particular workspace. This sequence number identifies the order in which the versions were created in that workspace. The DTR graphically represents the relationship between the different versions of a versioned resource in the form of a version graph.
    The following changes are tracked by the version control mechanism of the DTR:
    ·        Addition of the resource to the repository
    ·        Modification of the resource in the repository
    ·        Deletion of the resource from the repository
    In all the above cases, a new version of the resource is created.
    Production Delivery
    Packaging
    To deliver your product, you first have to package it. There are different packages you can use for shipping your product to your customers:
    ●      Software Component Archives (SCAs) – this is the standard way to deliver software for the SAP NetWeaver platform.
    ●      Software Deployment Archives (SDAs) – for top-level applications you can deliver only the executable part of the software. You can directly deploy the SDA file.
    ●      Public Part Archives (PPAs) in Development Component Interface Archives (DCIAs) – for reusable components (Java EE server libraries, Web Dynpro components, Visual Composer components and so on). You can deliver only the metadata of the components. DCIA can be included in SCA file too.
    How to do that?
    Using the command line tool provided with the SAP NetWeaver Composition Environment you can:
    ●      package a collection of components into an SCA including only the deployable archives. This is required if you do not want others to reuse the delivered components.
    ●      package a collection of components into an SCA including the deployable archives and the corresponding interface archives. This allows customers to develop against these components. Those customers can directly import the SCA into their own SAP NetWeaver Development Infrastructure (NWDI) or into an SAP NetWeaver Developer Studio local installation.
    ●      package the public parts of a component together with the required metadata into a DCIA (and further into an SCA).
    ●      include source code into an SCA.
    ●      unpack a deliverable archive and drop the result into an existing version control system for example, or directly import them into an existing Design Time Repository (DTR).
    Delivery of Source Code for Further Customization
    In addition, you can deliver source code to your customers to allow further customizing or add-on development. The deliverable archive may contain sources for:
    ●      individual development components (DCs).
    ●      a collection of development components, for example a whole software component (SC).
    Example
    A customer can add a new source compartment to an existing configuration, and then locate that compartment in the file system where it is accessible by the version control system in charge. Then he or she extracts the sources with the command line tool to the compartments root directory and refreshes the configuration in the SAP NetWeaver Developer Studio. The compartment tree is populated with components from the archive. Afterwards, the customer may put those components under version control. Deliverables that contain only individual components may be treated accordingly.
    This mechanism may also be used for other purposes, for example for setting up a simple backup and restore mechanism for components in Developer Studio, or sharing DC sources without having a central version control system: a developer may pack a compartment and store the resulting SCA on a central share or backup system. Another developer may take that SCA and import it.
    Limitations
    Note the following limitations connected with this kind of source code delivery:
    ●      There is no support for handling conflicts when different actors in a delivery chain develop independently in the same source code. You cannot prevent the customer from modifying delivered sources. When you ship a new version of the sources, there is no special support for updating and no support for merging the update with modifications done by the customer. You and the customer have to agree on a process how those conflicts are handled. For example, the customer can decide not to import the update you deliver directly into the active development line, but to unpack the delivered sources to some unconnected sandbox system and perform the required merges manually.
    ●      When you deliver source code to customers, it is important that you also deliver the required libraries and generators that are needed to build these sources. For example, it may be necessary to ship some archive compartments that contain used components.
    ●      There is no support for delivering deletions in a new version. If a source file was deleted, the customer has to manually ensure that the file is also deleted in the Developer Studio or source code management system.
    ●      If a customer prefers to work with the SAP NetWeaver Development Infrastructure (NWDI), this customer cannot directly import the source delivery package into the NWDI landscape. Between NWDI landscapes at different places, sources usually are exchanged through a more sophisticated export format that contains not only the pure source code, but also the versioning meta information of the exporting DTRs. This ensures that the importing repository can detect conflicts that arise due to modifications. If this versioning information is not available, the only way to import source deliveries is to unpack them to a file system and manually put them under version control with the Design Time Repository perspective of the Developer Studio. In case of an update, the customer would have to check out all affected files, merge them with the new versions from the source delivery, and finally check them in as a new version.
    More information: Composition Environment Command Line Tool
    see this url
    http://www8.sap.com/businessmaps/0134713B1D6046C59DE21DD54E908318.htm
    thanks
    karthik
    reward me if useful

  • Is it possible to use the message control in R/3 to trigger a Proxy?

    Is it possible to use the message control in R/3 to trigger a proxy? The message control mechanism has some advantages that I want to use, e.g. repeating messages with RSNAST00, configuration instead of coding, etc. Does anyone use SD invoice message control mechanisms together with XI proxies?
    Best regards,
    Matthias

    Hi,
    >>> configuration instead of coding etc
    Currently it is not possible to achieve this without any coding.
    >>> repeat messages with RSNAST00
    RSNAST00 is not used to repeat messages (IDocs) but to send them.
    Try using a standard IDoc in your SD invoice scenario
    if you don't want to do any coding.
    Regards,
    michal

  • Security mechanism

    Hi JHeadstart team:
    We are planning to use Oracle Single Sign-On with programmatic dynamic role-based authorization as our security control mechanism.
    The example in jhs_tutorial_3.pdf uses the Struts-UIX architecture and wraps the Jhsuser object through the ValidateLoginUser action,
    so, corresponding to our architecture, if we use SSO, what is the best practice for where to put the code that wraps the Jhsuser object?

    Ting Rung,
    See my reply on your other post about getting the username in an entity object.
    Steven Davelaar.
    JHeadstart Team.

  • Connections with third-party source control - will they be retained?

    I have RoboHelp installed on my hard drive. The company is swapping out my hard drive and I will need to reinstall RoboHelp.
    All my RH projects are connected to a third-party source control mechanism (Visual SourceSafe).
    When I reinstall RH, will the connections to Visual SourceSafe be retained for all my projects?

    Hi,
    Just getting the latest version should work. I had a similar problem after a crash. RoboHelp can't detect which projects are in the VSS database, but after getting them, it all worked fine.
    You may however still want to back up your projects on removable media to be sure. After the reinstall, just make sure the VSS client is installed, copy your projects from the media and get back to work -- could it be that simple? Well, it was for me.
    Greet,
    Willam

  • Message Flow Control

    Hello,
              as I read, WLS provides the possibility to slow down message producers when certain paging thresholds are exceeded. I'm not quite sure whether the slowdown of a producer is tied to its session lifecycle, i.e. whether the producer is slowed down only within its current session, or whether producers created from a new session are also slowed down right from the start.
              It would be great if someone could answer that. Unfortunately I have found nothing about that in the documentation.
              Regards,
              Dirk

    Hello Tom.
              First of all, thank you for your quick response. This forum is really working!
              I searched for this topic in the documentation but didn't find anything appropriate explaining the in-depth details of the JMS flow control mechanism. Therefore, I wrote a simple test case (implementation and server configuration are attached at the end of this message) to check the flow control handling under certain conditions. Since there is no API to access the flow control properties of a message producer, I measured the JMS throughput within the test case and tried to derive some conclusions.
              Regarding the server configuration:
              - I configured one JMS server (Test_JMSServer) with one queue (TestQueue). I enabled message paging at the server level, starting at 1000 messages (i.e. the high message threshold). The low message threshold was set to 100 messages. A file store (TestPagingStore) was used as the paging store.
              - Message thresholds at the destination level were disabled.
              - I further configured a connection factory (TestConnectionFactory) for which I enabled flow control with a maximum throughput of 50 msg/sec and a minimum throughput of 1 msg/sec. The flow interval was set to 60 seconds, and the number of steps was set to 6.
              Regarding the test case:
              There are 3 scenarios within the test case. Within each scenario,
              - first all messages are consumed from the queue,
              - then 1000 messages are sent to the queue in a row, measuring the throughput,
              - finally another 1000 messages are sent to the queue in a row, again measuring the throughput.
              The scenarios differ in how the bulk send operation is performed.
              - The first scenario (test_noReuse) creates a new connection for each message to be sent.
              - The second scenario (test_sessionReuse) creates a new session at the start and reuses that session for sending the messages. A new sender is created for each message.
              - The third scenario (test_senderReuse) creates a new sender at the start and reuses that sender for sending the messages.
              Here are the results of the test:
              ==[test_noReuse]=======================
              Cleaned up <2000> messages.
              Throughput (messages/second): 51 [startMillis=1161249409788;endMillis=1161249429210;messagesSend=1000]
              Throughput (messages/second): 62 [startMillis=1161249429210;endMillis=1161249445304;messagesSend=1000]
              ==[test_sessionReuse]=======================
              Cleaned up <2000> messages.
              Throughput (messages/second): 231 [startMillis=1161249452773;endMillis=1161249457101;messagesSend=1000]
              Throughput (messages/second): 245 [startMillis=1161249457101;endMillis=1161249461179;messagesSend=1000]
              ==[test_senderReuse]=======================
              Cleaned up <2000> messages.
              Throughput (messages/second): 790 [startMillis=1161249467351;endMillis=1161249468616;messagesSend=1000]
              Throughput (messages/second): 1 [startMillis=1161249468616;endMillis=1161249999226;messagesSend=1000]
              To me it seems that reusing a session has no effect on the message flow control if a new sender is created for each message. The only scenario where throttling occurs (at least it seems that way) is the last one, where the sender is reused for all messages.
              So my conclusion is that message flow control is superfluous to configure unless
              - you have a small number of (batch) message producers, each of which reuses a single sender for all messages,
              - or you have some cache of senders (or a single sender) in your application and reuse the cached senders for sending messages.
              The latter is, of course, strongly discouraged, since neither JMS sessions nor senders are thread-safe. So you need to implement a thread-safe cache and also deal with re-creating stalled JMS resources within that cache -- a really non-trivial implementation. Further, if many different threads produce messages, the flow control has no (or at least little) effect on the message load on the server.
              I just ask myself what the flow control was designed for, since the cases where it really applies are rather rare, IMHO. Or have I made some error in the test or in the server configuration?
              Best regards,
              Dirk
              The test class:
              <pre>
              import java.util.Hashtable;
              import javax.jms.BytesMessage;
              import javax.jms.JMSException;
              import javax.jms.Message;
              import javax.jms.Queue;
              import javax.jms.QueueConnection;
              import javax.jms.QueueConnectionFactory;
              import javax.jms.QueueReceiver;
              import javax.jms.QueueSender;
              import javax.jms.QueueSession;
              import javax.jms.Session;
              import javax.naming.InitialContext;
              import junit.framework.TestCase;

              public class JMSFlowControlTest extends TestCase {

                  private int messagesThresholdHigh = 1000;
                  private QueueConnectionFactory conFactory;
                  private Queue queue = null;

                  protected void setUp() throws Exception {
                      super.setUp();
                      // Look up the test queue and the flow-controlled connection factory via JNDI.
                      Hashtable ctxEnv = new Hashtable();
                      ctxEnv.put(InitialContext.INITIAL_CONTEXT_FACTORY,
                          "weblogic.jndi.WLInitialContextFactory");
                      ctxEnv.put(InitialContext.PROVIDER_URL, "t3://localhost:7001");
                      InitialContext ctx = new InitialContext(ctxEnv);
                      this.queue = (Queue) ctx.lookup("jms/queue/TestQueue");
                      this.conFactory = (QueueConnectionFactory)
                          ctx.lookup("jms/factory/TestConnectionFactory");
                  }

                  // Scenario 1: a new connection, session and sender for every message.
                  public void test_noReuse() throws Exception {
                      System.out.println();
                      System.out.println("==[" + this.getName() + "]=======================");
                      this.cleanupQueue();
                      System.out.println(this.bulkSendMessage(this.messagesThresholdHigh, 1024));
                      System.out.println(this.bulkSendMessage(this.messagesThresholdHigh, 1024));
                  }

                  // Scenario 2: one session is reused, but a new sender is created per message.
                  public void test_sessionReuse() throws Exception {
                      QueueConnection con = null;
                      QueueSession session = null;
                      System.out.println();
                      System.out.println("==[" + this.getName() + "]=======================");
                      try {
                          con = this.conFactory.createQueueConnection();
                          session = con.createQueueSession(false, QueueSession.AUTO_ACKNOWLEDGE);
                          this.cleanupQueue();
                          System.out.println(this.bulkSendMessage(session, this.messagesThresholdHigh, 1024));
                          System.out.println(this.bulkSendMessage(session, this.messagesThresholdHigh, 1024));
                      } finally {
                          if (con != null) {
                              try {
                                  con.close();
                              } catch (JMSException e) {}
                          }
                      }
                  }

                  // Scenario 3: one session and one sender are reused for all messages.
                  public void test_senderReuse() throws Exception {
                      QueueConnection con = null;
                      QueueSession session = null;
                      QueueSender sender = null;
                      System.out.println();
                      System.out.println("==[" + this.getName() + "]=======================");
                      try {
                          con = this.conFactory.createQueueConnection();
                          session = con.createQueueSession(false, QueueSession.AUTO_ACKNOWLEDGE);
                          sender = session.createSender(this.queue);
                          this.cleanupQueue();
                          System.out.println(this.bulkSendMessage(session, sender, this.messagesThresholdHigh, 1024));
                          System.out.println(this.bulkSendMessage(session, sender, this.messagesThresholdHigh, 1024));
                      } finally {
                          if (con != null) {
                              try {
                                  con.close();
                              } catch (JMSException e) {}
                          }
                      }
                  }

                  // Sends the given number of messages, creating a new connection, session and sender per message.
                  private Throughput bulkSendMessage(int messagesToSend, int messageSizeBytes)
                          throws Exception {
                      int messagesSend = 0;
                      long startMillis = System.currentTimeMillis();
                      for (int i = 0; i < messagesToSend; i++) {
                          QueueConnection con = null;
                          try {
                              con = this.conFactory.createQueueConnection();
                              QueueSession session =
                                  con.createQueueSession(false, QueueSession.AUTO_ACKNOWLEDGE);
                              QueueSender sender = session.createSender(this.queue);
                              sender.send(this.createMessage(session, messageSizeBytes));
                              messagesSend++;
                          } finally {
                              if (con != null) {
                                  try {
                                      con.close();
                                  } catch (JMSException e) {}
                              }
                          }
                      }
                      long endMillis = System.currentTimeMillis();
                      return new Throughput(startMillis, endMillis, messagesSend);
                  }

                  // Sends the given number of messages on the given session, creating a new sender per message.
                  private Throughput bulkSendMessage(QueueSession session, int messagesToSend,
                          int messageSizeBytes) throws Exception {
                      int messagesSend = 0;
                      long startMillis = System.currentTimeMillis();
                      for (int i = 0; i < messagesToSend; i++) {
                          QueueSender sender = session.createSender(this.queue);
                          sender.send(this.createMessage(session, messageSizeBytes));
                          messagesSend++;
                      }
                      long endMillis = System.currentTimeMillis();
                      return new Throughput(startMillis, endMillis, messagesSend);
                  }

                  // Sends the given number of messages, reusing the given session and sender.
                  private Throughput bulkSendMessage(QueueSession session, QueueSender sender,
                          int messagesToSend, int messageSizeBytes) throws Exception {
                      int messagesSend = 0;
                      long startMillis = System.currentTimeMillis();
                      for (int i = 0; i < messagesToSend; i++) {
                          sender.send(this.createMessage(session, messageSizeBytes));
                          messagesSend++;
                      }
                      long endMillis = System.currentTimeMillis();
                      return new Throughput(startMillis, endMillis, messagesSend);
                  }

                  private Message createMessage(Session session, int messageSizeBytes)
                          throws Exception {
                      BytesMessage message = session.createBytesMessage();
                      message.writeBytes(new byte[messageSizeBytes]);
                      return message;
                  }

                  // Drains the queue so every scenario starts with an empty destination.
                  private void cleanupQueue() throws Exception {
                      QueueConnection con = null;
                      int messageCount = 0;
                      try {
                          con = this.conFactory.createQueueConnection();
                          QueueSession session =
                              con.createQueueSession(false, QueueSession.AUTO_ACKNOWLEDGE);
                          QueueReceiver receiver = session.createReceiver(this.queue);
                          con.start();
                          Message message = null;
                          do {
                              message = receiver.receiveNoWait();
                              if (message != null) {
                                  messageCount++;
                              }
                          } while (message != null);
                      } finally {
                          if (con != null) {
                              try {
                                  con.close();
                              } catch (JMSException e) {}
                          }
                      }
                      System.out.println("Cleaned up <" + messageCount + "> messages.");
                  }

                  // Simple value object that reports messages per second.
                  private static final class Throughput {

                      private final long startMillis;
                      private final long endMillis;
                      private final int messagesSend;

                      public Throughput(long startMillis, long endMillis, int messagesSend) {
                          this.startMillis = startMillis;
                          this.endMillis = endMillis;
                          this.messagesSend = messagesSend;
                      }

                      public int value() {
                          long timeElapsedMillis = this.endMillis - this.startMillis;
                          if (timeElapsedMillis == 0) {
                              timeElapsedMillis = 1;
                          }
                          return (int) ((this.messagesSend * 1000L) / timeElapsedMillis);
                      }

                      public String toString() {
                          StringBuffer buffer = new StringBuffer();
                          buffer
                              .append("Throughput (messages/second): ").append(this.value())
                              .append(" [startMillis=").append(this.startMillis)
                              .append(";endMillis=").append(this.endMillis)
                              .append(";messagesSend=").append(this.messagesSend)
                              .append("]");
                          return buffer.toString();
                      }
                  }
              }
              </pre>
              The (partial) server configuration:
              <pre>
              <JMSConnectionFactory
              DefaultDeliveryMode="Non-Persistent"
              FlowMaximum="50"
              FlowMinimum="1"
              FlowSteps="6"
              JNDIName="jms/factory/TestConnectionFactory"
              Name="TestConnection Factory"
              Targets="SchufaDev"
              XAConnectionFactoryEnabled="true"/>
              <JMSFileStore Directory="filestore/paging" Name="TestPagingStore"/>
              <JMSServer
              MessagesPagingEnabled="true"
              MessagesThresholdHigh="1000"
              MessagesThresholdLow="100"
              Name="Test_JMSServer"
              PagingStore="TestPagingStore"
              Targets="SchufaDev">
              <JMSQueue
              CreationTime="1151395702234"
              JNDIName="jms/queue/TestQueue"
              JNDINameReplicated="false"
              Name="TestQueue"
              RedeliveryDelayOverride="5000"
              RedeliveryLimit="5"
              StoreEnabled="false"/>
              </JMSServer>
              </pre>

  • Change Control for package created in Modeler Perspective

    Hello,
    We have created a package through the modeler perspective and developed a couple of views in it.
    We want to enable change control for it so we can transport only the views which have changes in them.
    I have a couple of questions on how to achieve this:
    1) Is there any functionality to get this done through the modeler perspective itself? By this I mean without creating a repository and other setup.
    2) What will happen if I activate the change control mechanism and make a change to one of the views? If a developer checks out the package to his local repository and makes changes, will he be prompted to create a Change ID during activation?
    Thanks In Advance,
    MM.

    Hi Vikas, the projects used inside SSMS are not compatible with those in Visual Studio, so unfortunately you cannot open and manage them inside SSMS.
    The best practice we recommend is that you do your development inside Visual Studio and then publish the .dacpac file created on project build to your database - you can do this using VS, SSMS or SqlPackage.exe from the command line.
    If you do not want to use project-based development but do want TFS support for version changes, plenty of developers use SSMS for day-to-day changes and then use Schema Compare inside Visual Studio to sync those updates to a project and check them in.
    This gives you some level of change management and backup support. 
    Hope this helps,
    Kevin
