Combining 2 databases

Hi,
I want to combine two databases.
The first one (A) contains our payables with due dates, amounts, etc., derived from suppliers' invoices.
The other one (B) is an Excel file and contains our scheduled payments to suppliers (for instance, advance payments for which the invoice has not been accounted for yet).
What I need is a report showing both sets of information: the invoices already accounted for and the forecasted payments. I have been trying to link the two databases with no success.
What happens is that if a supplier from database B is not present in database A, no result is returned for it. I have tried all the link options and nothing worked. And if the supplier is present in database A, the amount stored in database B is repeated for each record of that supplier stored in database A.
What do you think I should do?
thanks a lot.
Cya.

Hi, 
Usually when you link two different databases, Crystal will do a Left Outer Join. If you have a Record Selection Formula, it will force an Equal Join. 
There's no easy way around this. I've done different things depending on the report. In most cases, I brought in all the records, grouped on a unique field, and hid the detail section, using only the Group Footer to display my details. This isn't always a good way, since you could potentially have millions of records to go through unnecessarily. 
Another way would be to create a SQL Command for your second table, do your filtering in there, then use that instead of the table. The trade-off here would be performance. 
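A minimal sketch of such a SQL Command, assuming hypothetical table and column names for the scheduled-payments file (the {?FromDate} placeholder uses Crystal's command-parameter syntax):

    SELECT SupplierID, PaymentDate, Amount
    FROM ScheduledPayments
    WHERE PaymentDate >= {?FromDate}  -- filter here instead of in the record selection formula

Because the filtering happens inside the command itself, it no longer converts the left outer join into an equal join.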
Good luck,
Brian

Similar Messages

  • Please Help: Problem - Combining database fields in a text object

    Hello
    Thank you for viewing my thread and any helpful feedback you may provide. I am having trouble with the "Combining database fields in a text object" tutorial in the Help under: Quick Start\Quick Start for new users\Combining database fields in a text object.
    After following the steps and previewing, I see the last name, first name stacked on top of each other in preview and design view. I don't know why this is happening. Can anyone point me in the right direction so the text is displayed the same way as it is in the Help?
    Thanks
    M

    Hi, 
    I don't think the field is actually in the text object; it's just stacked on top of it. 
    When you double-click on the text object, the cursor should be inside it. Is the database field in there? If not, you can drag and drop it into the text object where you want it to go. 
    You will know it's going in because the edit cursor will show you where in the text object it will drop. 
    The other way to do it is to drop the database field somewhere on the report, copy it to the clipboard, and paste it into the text object. 
    Good luck,
    Brian

  • Combining Databases

    I have several distinct Oracle 9.2.0 database instances which share a common database schema but each distinct database instance holds data for a distinct geographical region.
    Aside from the actual data, the database instances are identical in every other way: same Oracle version (9.2.0), same database schema etc.
    There is nothing in the database schema to identify the geographical region from which data originated.
    The separate database instances are managed by separate Oracle 9.2.0 RDBMS running on separate platforms.
    The separate database instances are to be merged into a single Oracle 9.2.0 database instance managed by a single Oracle 9.2.0 RDBMS running on a single platform.
    It is a requirement to be able to identify data for a particular geographical region in the single merged database instance.
    To be able to do this, the existing database schema must be modified so that an extra "geographical region" identifying column is added to particular tables, marking data as belonging to a geographical region.
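    For illustration, a minimal sketch of that kind of change, using a hypothetical ORDERS table and region code:

        ALTER TABLE orders ADD (region_code VARCHAR2(10));
        UPDATE orders SET region_code = 'EMEA';  -- run per source instance before the merge
        ALTER TABLE orders MODIFY (region_code NOT NULL);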
    There are also existing applications which write and read data into and from the databases. These applications will need to be modified if the existing database schema is modified.
    I am after any advice or information about similar database instance mergers.
    Separate database instances for each geographical region could be continued, but managed by a single Oracle 9.2.0 RDBMS running on a single platform. This would seem to be a much easier option in terms of database schema and application modification and rework, though possibly with lower performance. Advice or information about this option is also sought.
    Thanks,
    Brett Morgan.

    I don't see why you'd need to combine databases to do that, either.

  • How can I combine two databases, from two computers, to have one combined database of messages?

    My old XP computer recently died and I had to build a new Windows 8.1 machine. While I was down I used a laptop as a temporary replacement. Now my new machine is running fine and receiving e-mail, but I now have two databases--one on the new machine and one on the laptop. Both are based on a recent backup, so they are lengthy--except that the new machine's database has a hole in it for the period I was on the laptop and the laptop's database also has gaps. How can I combine two databases into one that includes the messages from both machines?
    Thanks in advance,
    profsimonie

    Thanks for your reply. My profile folder did not contain any MBOX files. I found them in another folder on another drive. The Import-Export tools simply transferred each folder to the current one as a sub-folder. Then I had to use ctrl-a to select everything in the folder and move them manually to the current folder (such as inbox). Then I could use the other utility you mentioned to remove the duplicates. This had to be done, one folder at a time, to combine each folder. It worked, but I had about thirty-five folders to deal with. The whole process took most of two days to complete. I wish there was a simple way to blend everything together in one action, but I could not find an add-on that would do this.
    Frank Simonie

  • Combine database fields as parameter

    Post Author: teddybear
    CA Forum: General
    I am building a financial report. Among other things, the users want to select the report data based on financial year and period, for example from 2006 period 6 to 2007 period 2.
    If I use normal parameters, then records having year equal to 2006 or 2007 AND period 2 to 6 will be selected. This is not right: for example, a record with year 2006 and period 2 would be selected.
    My question: how can I create a parameter on a combined field?

    Post Author: nscheaff
    CA Forum: Data Connectivity and SQL
    In the following SQL statement the database is parameterized:

        select * from {?Database}.dbo.YourTable

    This works using an ODBC connection against a Microsoft SQL Server 2000 database. Is that what you are looking for?
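    For the original year/period question, a common workaround is to combine year and period into a single comparable value and select a range on that instead. A sketch with assumed column names, where the period runs 1-12:

        SELECT *
        FROM GLTransactions
        WHERE (FiscalYear * 100 + FiscalPeriod) BETWEEN 200606 AND 200702  -- 2006 period 6 to 2007 period 2

    The same expression can be used in a record selection formula against two combined parameter values.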

  • Are mutliple database calls really significant with a network call for a web API?

    At one of my employers, we worked on a REST (but it also applies to SOAP) API. The client, which is the application UI, would make calls over the web (LAN in typical production deployments) to the API. The API would make calls to the database.
    One theme that recurs in our discussions is performance: some people on the team believe that you should not have multiple database calls (usually reads) from a single API call because of performance; you should optimize them so that each API call has only
    (exactly) one database call.
    But is that really important? Consider that the UI has to make a network call to the API; that's pretty big (order of magnitude of milliseconds). Databases are optimized to keep things in memory and execute reads very, very quickly (eg. SQL Server loads and
    keeps everything in RAM and consumes almost all your free RAM if it can).
    TLDR: Is it really significant to worry about multiple database calls when we are already making a network call over the LAN? If so, why?
    To be clear, I'm talking about orders of magnitude -- I know that it depends on specifics (machine hardware, choice of API and DB, etc.). If I have a call that takes O(milliseconds), does optimizing away DB calls that take an order of magnitude less actually matter? Or is there more to the problem than this?
    Edit: for posterity, I think it's quite ridiculous to make claims that we need to improve performance by combining database calls under these circumstances -- especially
    with a lack of profiling. However, it's not my decision whether we do this or not; I want to know what the rationale is behind thinking this is a correct way of optimizing web API calls.

    But is that really important? Consider that the UI has to make a network call to the API; that's pretty big (order of magnitude of milliseconds). Databases are optimized to keep things in memory
    and execute reads very, very quickly (eg. SQL Server loads and keeps everything in RAM and consumes almost all your free RAM if it can).
    The Logic
    In theory, you are correct. However, there are a few flaws with this rationale:
    From what you stated, it's unclear whether you actually tested / profiled your app. In other words, do you actually know that the network transfers from the app to the API are the slowest component? Because that is intuitive, it is easy to assume it is. However, when discussing performance, you should never assume. At my employer, I am the performance lead. When I first joined, people kept talking about CDNs, replication, etc. based on intuition about what the bottlenecks must be. It turned out our biggest performance problems were poorly performing database queries.
    You are saying that because databases are good at retrieving data, the database is necessarily running at peak performance, is being used optimally, and there is nothing that can be done to improve it. In other words, databases are designed to be fast, so I should never have to worry about them. Another dangerous line of thinking. That's like saying a car is meant to move quickly, so I don't need to change the oil.
    This way of thinking assumes a single process at a time, or put another way, no concurrency. It assumes that one request cannot influence another request's performance. But resources are shared: disk I/O, network bandwidth, connection pools, memory, CPU cycles, etc. Therefore, reducing one database call's use of a shared resource can prevent it from slowing down other requests. When I first joined my current employer, management believed that tuning a 3-second database query was a waste of time. 3 seconds is so little, why waste time on it? Wouldn't we be better off with a CDN or compression or something else? But if I can make a 3-second query run in 1 second, say by adding an index, that is 2/3 less blocking, 2/3 less time spent occupying a thread, and more importantly, less data read from disk, which means less data flushed out of the in-RAM cache.
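    As a concrete illustration of that kind of fix (a sketch with a hypothetical Orders table, queried by customer):

        -- Before: every lookup scans the whole table for one customer's rows
        SELECT OrderId, Total FROM Orders WHERE CustomerId = 42;

        -- A covering index turns the scan into a cheap seek
        CREATE INDEX IX_Orders_CustomerId ON Orders (CustomerId) INCLUDE (OrderId, Total);

    (INCLUDE is SQL Server syntax; on other engines a plain composite index on the same columns does the same job.)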
    The Theory
    There is a common conception that software performance is simply about speed.
    From a purely speed perspective, you are right. A system is only as fast as its slowest component. If you have profiled your code and found that the Internet is the slowest component, then everything else is obviously not the slowest part.
    However, given the above, I hope you can see how resource contention, lack of indexing, poorly written code, etc. can create surprising differences in performance.
    The Assumptions
    One last thing. You mentioned that a database call should be cheap compared to a network call from the app to the API. But you also mentioned that the app and the API servers are in the same LAN. Therefore, aren't both of them comparable as network calls? In
    other words, why are you assuming that the API transfer is orders of magnitude slower than the database transfer when they both have the same available bandwidth? Of course the protocols and data structures are different, I get that, but I dispute the assumption
    that they are orders of magnitude different.
    Where it gets murky
    This whole question is about "multiple" versus "single" database calls. But it's unclear how many count as multiple. Because of what I said above, as a general rule of thumb, I recommend making as few database calls as possible. But that is only a rule of thumb.
    Here is why:
    Databases are great at reading data. They are storage engines. However, your business logic lives in your application. If you make a rule that every API call results in exactly one database call, then your business logic may end up in the database. Maybe that
    is ok. A lot of systems do that. But some don't. It's about flexibility.
    Sometimes to achieve good decoupling, you want to have 2 database calls separated. For example, perhaps every HTTP request is routed through a generic security filter which validates from the DB that the user has the right access rights. If they do, proceed
    to execute the appropriate function for that URL. That function may interact with the database.
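    A sketch of what that first, separate call might look like, with assumed table and column names:

        SELECT COUNT(*)
        FROM UserPermissions
        WHERE UserId = @UserId AND Resource = @Resource;

    If the count is zero, the request is rejected before the handler for the URL ever touches the database.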
    Calling the database in a loop: this is why I asked how many count as multiple. In the example above, you would have 2 database calls. 2 is fine. 3 may be fine. N is not fine. If you call the database in a loop, you have now made performance linear, which means the request will take longer the more items are in the loop's input. So categorically saying that the API network time is the slowest completely overlooks anomalies like 1% of your traffic taking a long time due to a not-yet-discovered loop that calls the database 10,000 times.
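    To make the loop problem concrete, a sketch with a hypothetical Products table:

        -- N+1 pattern: the app issues this once per item, in a loop (N round trips)
        SELECT Name, Price FROM Products WHERE ProductId = @Id;

        -- Set-based alternative: one round trip for the whole batch
        SELECT Name, Price FROM Products WHERE ProductId IN (101, 102, 103);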
    Sometimes there are things your app is better at, like some complex calculations. You may need to read some data from the database, do some calculations, then based on the results, pass a parameter to a second database call (maybe to write some results). If
    you combine those into a single call (like a stored procedure) just for the sake of only calling the database once, you have forced yourself to use the database for something which the app server might be better at.
    Load balancing: you have 1 database (presumably) and multiple load-balanced application servers. Therefore, the more work the app does and the less the database does, the easier it is to scale, because it's generally easier to add an app server than to set up database replication. Based on the previous bullet point, it may make sense to run a SQL query, then do all the calculations in the application, which is distributed across multiple servers, and then write the results when finished. This could give better throughput (even if the overall transaction time is the same).
    TL;DR
    TLDR: Is it really significant to worry about multiple database calls when we are already making a network call over the LAN? If so, why?
    Yes, but only to a certain extent. You should try to minimize the number of database calls when practical, but don't combine calls which have nothing to do with each other just for the sake of combining them. Also, avoid calling the database in a loop at all
    costs.

  • Restoring database using C sharp console application(SQl server 2008 R2)

    Hi All,
    I want to take backup and restore database from one server to another through C sharp Console Application.
    My backup is working fine but the restore is not working and throws the error below:
    An unhandled exception of type 'Microsoft.SqlServer.Management.Smo.FailedOperationException' occurred in Microsoft.SqlServer.SmoExtended.dll
    Database name is fine while restoring.
    Please give me some ideas in resolving this.
    Thanks in advance.
    blrSvsTech

    Hi Olaf,
    I used the Replace database option.
    My code is something like this:
      public void Restoredb(string serverName, string userName, string password, string backupFilePath)
      {
          ServerConnection conn;
          if (userName == " ") // for Windows Authentication
          {
              SqlConnection sqlCon = new SqlConnection(@"Data Source=" + serverName + @";Integrated Security=True;");
              conn = new ServerConnection(sqlCon);
          }
          else // for Server Authentication
          {
              conn = new ServerConnection(serverName, userName, password);
          }
          Server srv = new Server(conn);
          Database database = srv.Databases["db_Miller"];
          string dbPath = Path.Combine(database.PrimaryFilePath, "db_Miller.mdf");
          string logPath = Path.Combine(database.PrimaryFilePath, "db_Miller.ldf");
          Restore restore = new Restore();
          // backupFilePath is the path of the .bak file to restore from
          BackupDeviceItem deviceItem = new BackupDeviceItem(backupFilePath, DeviceType.File);
          restore.Devices.Add(deviceItem);
          restore.Action = RestoreActionType.Database;
          restore.ReplaceDatabase = true;
          restore.NoRecovery = true;
          restore.SqlRestore(srv); // <-- Getting the error in this line.
          database = srv.Databases["db_Miller"];
          database.SetOnline();
          srv.Refresh();
          database.Refresh();
      }
    Thanks in advance.
    blrSvsTech
    Hi blrSvsTech,
    You haven't specified the restore destination database; please try the following code:

      public void Restoredb(string serverName, string userName, string password, string backupFilePath)
      {
          ServerConnection conn;
          if (userName == " ") // for Windows Authentication
          {
              SqlConnection sqlCon = new SqlConnection(@"Data Source=" + serverName + @";Integrated Security=True;");
              conn = new ServerConnection(sqlCon);
          }
          else // for Server Authentication
          {
              conn = new ServerConnection(serverName, userName, password);
          }
          Server srv = new Server(conn);
          Restore restore = new Restore();
          BackupDeviceItem deviceItem = new BackupDeviceItem(backupFilePath, DeviceType.File);
          restore.Devices.Add(deviceItem);
          restore.Action = RestoreActionType.Database;
          restore.ReplaceDatabase = true;
          // Note: NoRecovery = true leaves the database in the RESTORING state, so the
          // SetOnline() call below will fail; set it to false (the default) if the
          // database should be usable immediately after the restore.
          restore.NoRecovery = true;
          restore.Database = "db_Miller"; // the missing piece: the destination database name
          restore.SqlRestore(srv);
          Database database = srv.Databases["db_Miller"];
          database.SetOnline();
          srv.Refresh();
          database.Refresh();
      }
    If the issue occurs again, please post all the error message here for analysis.
    For more detailed information, you can refer to the following link:
    Restore.SqlRestore Method
    http://technet.microsoft.com/en-us/library/microsoft.sqlserver.management.smo.restore.sqlrestore.aspx?cs-save-lang=1&cs-lang=csharp#code-snippet-1
    Best Regards,
    Allen Li
    Allen Li
    TechNet Community Support

  • Unknown user name/password combination

    Hi SAPDBV Gurus,
    1. I did a full liveCache backup of system A (LCZ).
    2. Did a restore with initialization to the target (LCR).
    3. I checked the system log (SM21): "Unknown user name/password combination", and also the liveCache logs.
    4. When I tried to do a load_systab with dbmcli, I see the message: -24907,ERR_DBAWRONG: wrong SYSDBA
    I could not create the DBADMIN user or reset the password; it gives a user_put error.
    I also found that in the source the SYSDBA user was DBADMIN (LCZ), while in the target it was SYSDBA (LCR).
    Thanks,
    Naren
    > Unknown user name/password combination
    Database error -4008 at CON

    Hello,
    Please update with additional information:
    -> What are the versions of the source & target liveCache instances?
    -> Please update with outputs of the following commands:
    dbmcli -u control,<controlpw> -d LCR
    <enter>
    Dbmcli on LCR>db_state
    Dbmcli on LCR>sql_connect
    Dbmcli on LCR>sql_execute select * from users
    Dbmcli on LCR>load_systab -u DBADMIN,DBADMIN -ud DBADMIN
    Dbmcli on LCR>exit
    -> What are the dbm users? (if the liveCache instances were not created with the control user)
    -> Please review SAP note 877203.
    Thank you and best regards, Natalia Khlopina

  • Service Manager, Configuration Manager and Orchestrator 2012 Database

    Hi,
    I have installed Configuration Manager 2012 R2 (CM) on a system and I want to install Service Manager 2012 R2 (SM). I was wondering if it would be possible to point to the database of CM while installing SM 2012 R2, or do I need to install a separate database?
    Similarly, if I want to install Orchestrator, would it be possible to point to the database of CM while installing Orchestrator 2012 R2, or do I need to install a separate database?
    Thanks

    Firstly, each System Center component (SCOM, SCCM, etc.) has its own database(s). I believe that you are asking about using the same SQL server, or possibly the same SQL server instance.
    The answer to your question would depend on a few things, like if you are trying to do this in a lab/POC, or in Production. Here is an article about coexistence of System Center components: http://technet.microsoft.com/en-us/library/jj851033.aspx.
    Although it applies to System Center 2012 SP1, I would believe the same can be applied to the R2 version.
    Hopefully that gets you started in the right direction. 
    Also, note what it says in this article (http://www.derekseaman.com/2013/06/teched-2013-system-center-config-mgr-2012-sp1.html), in the SQL Guidelines,
    specifically "Do NOT combine databases from other system center products. Don’t build a giant SQL cluster for all system center products."

  • My thoughts on testing DocumentDB

    Despite knowing DocumentDB won't be an option yet for my needs because of the lack of OrderBy and other known limitations in the Preview, I wanted to try it out and run some basic query tests against it to see what's already possible, how it performs, where it lacks features, and whether it would make sense to consider DocumentDB as a future replacement for my current combined Azure database (SQL Server + Table Storage) solution.
    I want to share my findings as a feedback on this preview.
    My scenario
    While the big picture is much more complex, for this post and my DocumentDB test I reduced my app functionality to its most basic requirement: users can subscribe to news channels and have all articles of their subscriptions shown in a combined list. There are thousands of news channels available and users may subscribe to anywhere from 1 to hundreds or even thousands of them, with 1-100 being the common range of subscriptions. The app has tagging for read/unread articles and starred articles, everything can also be organized in folders, and users can filter their article lists by these tags - but I left all of these complexities out for now.
    DocumentDB architecture
    One collection for news channels, one collection for articles. I decided to split the channels from the articles, as there are some similarities in column names and this would have caused issues in index design. I imported around 2,000 NewsChannel rows from my SQL DB and around 3 million articles, filling the Articles collection to nearly 10 GB of data.
    A NewsChannel document looks like this:
    id - I took the int value from my SQL database for this
    Name
    Uri
    An Article document looks like this:
    id - I also took the int value from my SQL database for this
    NewsChannelId - in SQL DB, the foreign key from the NewsChannel table
    Title
    Content
    Published - DateTime converted to Epoch
    I put range indexes on id and Published as most of the time, I'd query for ranges of documents (greater than an id, newer than a specific published date, ...). I also excluded some columns from indexing, like Content in the articles document.
    Test 1 - Get newest 50 articles for a single news channel
    SELECT TOP 50 * FROM Articles WHERE NewsChannelId == [50] ORDER BY id DESC
    I knew this would fail due to the lack of OrderBy. I tried to find a workaround with custom indexes, but there is no way to define an index organized in descending order so that the newest entries would always be returned first. That would have been enough, as I don't really need ascending order for articles, so it would have made up for the lack of OrderBy. But it does not seem to be possible.
    Result: Impossible
    Test 2 - Get newest 50 articles for all subscribed news channels of a user
    SELECT TOP 50 * FROM Articles WHERE NewsChannelId IN ([1, 6, 100, 125, 210, ...]) ORDER BY id DESC
    This would be the most used query, and it would have been very interesting to see how it performs, but due to its similarity to Test 1 it's also not possible. A variant of it is described in the next test (3).
    Result: Impossible
    Test 3 - Get any articles newer than a given article from all subscribed news channels of a user
    This was the first test where I hoped to get some results. Each article document has a range index on id and its Published date, so this should be fast and nice. I seem to have failed to create the range index for id correctly, as DocumentDB complained about id not being a range index - that sucked, because to fix it I would have to recreate the whole database and re-import all the data. But luckily, the index on Published was created correctly, and it would do for testing this kind of query just as well as the id.
    SELECT * FROM Articles WHERE NewsChannelId IN [1, 6, 100, 125, 210, ...] AND Published > [someDate]
    Unfortunately, I found out there is no "contains" query supported in DocumentDB that would work like a WHERE IN query in SQL. But if I want to query articles for all subscribed channels, I have to pass a list of NewsChannel IDs to the query. That was a real surprise to me, as something like this seems just as much a base functionality as OrderBy.
    Result: Impossible
    Test 4 - Get any articles newer than a given article for a single news channel
    Just like Test 3, but for a single news channel - so finally, here DocumentDB supports my needs.
    SELECT * FROM Articles WHERE NewsChannelId == [id] AND Published > [someDate]
    And yes, this works, and the performance seems OK. But to my surprise, even though this returns just 5 documents and the query is well supported by the range index on Published, it has extremely high RU costs - depending on the query, somewhere between 2,500 and 6,000 in my tests - which would mean that with 1 CU, I would already be throttled for such a simple query.
    Result: Possible, quite fast but insanely high RU costs
    Test 5 - Get a single article from a News Channel
    As expected, this works like a charm. Fast and with 1 RU cost per query.
    Result: Works great.
    Other stuff I noticed:
    For my scenario, I see no way to scale my DocumentDB. I already reached the limit of a single collection with only a fragment of all my data. I would need to do the partitioning myself, for example by having a single collection for each news channel, like I did in Table Storage where the NewsChannelId is the partition key - but due to the collection number limitations, and even more due to the limited query capabilities INSIDE a single collection, I see no way I could do performant queries if I needed to query multiple, maybe even hundreds of, different collections in one query.
    Even if the space limit of a single collection were raised to terabytes of data, I see the issue that I would run into serious performance problems because, as I understand it, a collection can only ever live on a single node. To support more load, I would be required to split my data over multiple collections to get multiple nodes, but then again, this would not support my query needs. Please correct me if I'm seeing something wrong here.
    Wrap up
    Seeing that even my most basic query needs cannot be supported by DocumentDB right now, I'm a bit disappointed. OrderBy is coming, but it won't help without WHERE IN queries, and even with them, I still don't know whether the combination will perform well and what the RU costs will look like in such cases, if simple range queries returning a small number of documents already cost that much.
    I'm looking forward to what happens next with DocumentDB, and I really hope I can replace my current solution with it at some point, but currently I don't see it. A good fit for me would be MongoDB, but it's not PaaS and it's hard and resource-intensive to host ... so DocumentDB looked very nice at first sight. I really hope those issues will be resolved, and resolved soon.
    b.

    Hi Aravind,
    thank you very much for your detailed response.
    Test 1: That's a good idea for a workaround, although it would get complicated when I want the top 50 documents from all subscribed news channels, which can be 100 or more (Test 2). The index documents can also get pretty large, which might bring me to the size limit of a single document and force me to split the index across multiple documents for a single news channel. However, for a proof-of-concept implementation with DocumentDB, this will do fine. I might try that :)
    Test 2: Yes, but the ORs are currently limited to a maximum of 5 (?), so this is not really an option, as I need more most of the time.
    Test 3: I will have a look at this and see how that performs!
    Test 4: I used a classic UNIX epoch timestamp (seconds since 1970) and a precision of 7 for the index. See below the code I used to create the index, so I think this should be OK. However, I'm glad to share the details of my account and a sample query so you can have a look for yourself. I will contact you by mail with the details.
    articleCollection.IndexingPolicy.IncludedPaths.Add(new IndexingPath
    {
        IndexType = IndexType.Range,
        Path = "/\"InsertedEpoch\"/?",
        NumericPrecision = 7
    });
    As for partitioning - thanks for the article. For me, a fan-out-on-read strategy would be required if I did my partitioning by news channel ID ... but that's what's giving me headaches. Given that it is not uncommon for a user of my app to have 100 or more subscriptions, I would need to issue 100 parallel queries. I tried something like that with Azure Table Storage and found it to be a performance nightmare. That's why I currently use Table Storage as a pure document store but still do all computations of the requested articles in SQL Server. But yes, I might have to put more thought into that and see how I can squeeze out the performance I need. SQL Server is fine and I can do a lot with indexes and indexed views to support my query scenarios - but it also has its scalability limits, and the reason it still works well is that my app is in a testing/beta state and does not have the amount of data and users it will have when it finally goes live. That's the reason I am searching for a NoSQL solution like DocumentDB that supports my needs, mainly scale-out, better.
    Thanks again for your response and your suggestions, with that, I might be able to do a basic proof of concept implementation that supports my core features for some testing with DocumentDB and see how it's doing.
    I will contact you by mail with the RU test data.
    Happy new year! :)
    Bernhard

  • Verity - files and db query combo

    Is it possible to have both a search of some files and a
    query of a database table in the same cfcollection? If so, guidance
    on how to do this?

    I'm not sure if this answers your question, but it's possible
    to combine database and files while indexing.

  • Alarm and Event Query Very Slow

    Group,
    I am using the DSC Alarm and Event Query VI to pull data from the SQL database (not the Citadel).  I have a filter set up that specifies the "Alarm Area" and the start and stop dates, with maximum results set to 22.  These dates are set to only pull the last 24 hrs.  This VI will return around 10 to 20 entries out of perhaps 80-90 total events in the last 24 hours in the database.  The database is ~2M in size.  I have to set the timeout to almost 10 min for this VI not to produce a timeout error.  The results returned are correct, but the time to run this VI just seems excessive.  It is querying a database on the same system that the querying VI is running on.  Should I expect better performance?
    Thanks
    Todd

    Verne,
    I have boiled the code down to this attachment.  This query took almost 7 min to return 22 results from a database whose size is listed as 2.09877E+6.  I have also tried the Alarm and Event Query to Spreadsheet VI, and it takes the same amount of time.  I am wondering if I placed the alarms and events into the same (Citadel) database that the traces go to, whether it would be much faster.  I seem to get trace data back very fast.  If I recall correctly, I separated the alarms from the traces because I was having some sort of problem accessing the alarm data in the combined database... but that was several LabVIEW versions ago.  Anyone else having this problem?
    Thanks
    Todd
    Attachments:
    Generate Alarm Log General Test.vi ‏19 KB

  • Problem in creating db link

    Hi,
    1. I have created db link using as follows (ORACLE 8.1.6):
    CREATE PUBLIC DATABASE LINK TORAMESH CONNECT
    TO rem_user IDENTIFIED BY pass USING 'ST1';
    2. I have the connect string ST1 created.
    3. When I connect using the string, I can connect to the remote database, but the problem comes when I connect to the local database and try to query using @TORAMESH. Then I get the following error:
    ORA-02019: connection description for remote database not found.
    Please help me to solve the problem.
    Thanks in advance...
    Ramesh

    I think it was caused by an improper dblink name which does not give the complete description of your remote database.
    See the information below from http://otn.oracle.com/docs/products/oracle8i/doc_library/817_doc/server.817/a85397/sql_elem.htm#27762 - there you may find what you should use for the dblink name.
    The name that you give to a database link must correspond to the name of the database to which the database link refers and the location of that database in the hierarchy of database names. The following syntax diagram shows the form of the name of a database link:
    database.domain
    database should specify name portion of the global name of the remote database to which the database link connects. This global name is stored in the data dictionary of the remote database; you can see this name in the GLOBAL_NAME view.
    domain should specify the domain portion of the global name of the remote database to which the database link connects. If you omit domain from the name of a database link, Oracle qualifies the database link name with the domain of your local database as it currently exists in the data dictionary.
    connect_descriptor allows you to further qualify a database link. Using connect descriptors, you can create multiple database links to the same database. For example, you can use connect descriptors to create multiple database links to different instances of the Oracle Parallel Server that access the same database.
    The combination database.domain is sometimes called the "service name".
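    For example, a sketch assuming the remote database's global name turns out to be ST1.WORLD (check the GLOBAL_NAME view on the remote database first):

        -- run on the remote database to find its global name
        SELECT * FROM global_name;

        -- name the link after that global name
        CREATE PUBLIC DATABASE LINK st1.world CONNECT TO rem_user IDENTIFIED BY pass USING 'ST1';
        SELECT * FROM some_table@st1.world;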

  • 2 Servers install on one MaxDB

    I've installed MaxDB 7.6 on Windows 2008 R2 and created an instance SDB on the <local> database.
    It works pretty well.
    Under this configuration I'd like to add another server (named ContentServer) on the existing server as above.
    That means there would be 2 different servers (one <local> and ContentServer) on one machine.
    Is it possible to configure it like this?
    The history behind this is the following:
    Two Content Server systems with the same instance name "SDB" were installed a couple of years ago.
    I want to merge these 2 old systems into one new MaxDB on Windows 2008 R2. The first SDB was installed via restore without any problem. But now the question is how to add the 2nd system on the existing server with the same instance name - we don't want to install it as a 2nd instance with a different name.
    Who can give me hints about this or recommend a reasonable solution?
    Thanks a lot in advance,
    Gauguin

    Hi,
    I am not sure I have understood your question correctly, so let me summarize:
    You have Content Server A with CS database A (SDB) and Content Server B with CS database B (SDB).
    Now you want to merge both database instances A and B into one SDB instance,
    and then access this new combined database from both Content Servers?
    Technically, you could use the MaxDB 'loader' tool to export data from one database into the other, but I think this must be handled/discussed on the application level first. What if both databases contain documents with the same ID but different data?
    I would recommend opening an OSS message on the BC-SRV-KPR-CS queue to have the Content Server application colleagues assess this scenario first.
    Feel free to correct me, though, if I misunderstood you here...
    Kind regards,
    Thorsten

  • Is there two layers of audio on imovie or idvd

    I just finished a 300-picture presentation on Vietnam for a friend using Keynote (excellent program). Each picture has a different show time (10-20 seconds) because of the amount of text on it and the info in the picture. We want to include background music for the presentation but also do a voice-over on some slides to explain what's going on in some of the pictures. Does either iMovie or iDVD have two audio layers, or do I have to use Final Cut, which I can, but then I have to go back through all the stills and reset the play times? Thanks, Karl

    You are mindless!!! Just kidding.
    You are forgetting that the database is an operating system.  The operating system (database) has data, users, accounts, and methods.
    There is an association between a user and an account.  When a user logs onto any operating system, there is data/properties owned by the user.  The properties are obtained by the operating system from the account when you log on.
    The OOP objects in a database are tables.  Normally OOP refers to class objects, but a database table can be considered a class-type object.  An association in a database is where two or more tables share a common field.  You can use a SQL Join statement to combine database tables by their common fields.
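    A minimal sketch of that idea, with hypothetical Users and Accounts tables sharing a common field:

        SELECT u.name, a.status
        FROM Users u
        JOIN Accounts a ON a.user_id = u.id;  -- the shared field combines the two tables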
    jdweng

  • Backup in Organizer 10 on Mac

    I'll describe both my intermediate and final issue with Organizer 10 for Mac:
    1.  Intermediate: After optimizing and error-checking an 11k+ photo database, I used the backup function to create a backup database on the hard drive of the computer currently hosting the database. Note that there are no videos in my database. The backup didn't take too long and I received a message that the backup was complete. When I immediately tried restoring the database, my reward was the following error message:
    2.  Final: I'm actually trying to move/copy the database from my wife's slower Mac to my faster Mac. My plan is to have a copy of the tags I've put into the existing database on my machine, and then put all of the original images on a share drive that neither my wife nor I can write to easily. We then do our own edits and albums using separately licensed copies of Elements. It's less an issue of sharing our work than it is of sharing our computers. With Elements on her computer, my wife wants me out of the way so that she can use her own machine....
    3.  Chat: I was on chat today and was advised to purchase a USB drive of some sort to transfer databases between machines. I'll think about doing that after I get the backup function to work properly on the same machine.

    1.  Thank you for your interest.
    2.  The point of saving and restoring to the same drive is to test the backup function. If the backup doesn't work on the same drive, it probably won't work on a different drive.
    3.  A thumb drive strikes me as an inefficient way to network. I'll stay away from them as long as I can. If you're happy with it, I'm not trying to change your mind.
    4.  Editing original photos is kind of an un-Photoshop-Organizer thing to do anyway. It seems to me that edited file copies from somebody else's database are going to mess with what Organizer thinks is in the directory. What I was proposing would have us saving our edits in different file folders on different computers until we were ready to share them.
    5.  Of course, what that leaves is the issue of how we share tags for the combined database.... If I made the database writable to both my wife and me, I think Organizer would look at tags from each of us as if they were foreign objects. I could be wrong, but I don't think that Organizer/Elements is well designed for sharing a common database of photos. I've neither the time nor the money to do the full Photoshop.
