JavaScript usage with SQL

Hi,
I need to know how to use JavaScript with SQL for insert, update and delete operations.
Need more explanation
Help me

user8818950 wrote:
Hi,
I need to know how to use JavaScript with SQL for insert, update and delete operations.
Need more explanation
Help me

Then you should perhaps ask on a JavaScript forum, as that's a JavaScript question you're asking, not a SQL or PL/SQL question.

Similar Messages

  • Understanding replica volume and recovery point volume usage with SQL Express Full Backup

    I am running some trials to test DPM 2012 R2's suitability for protecting a set of SQL Server databases, and I am trying to understand what happens when I create a recovery point with Express Full Backup.
    The databases use the simple recovery model, and in the tests I have made so far I have loaded more data into the databases between recovery points, since that will be a typical scenario - the databases will grow over time. The database files are set to autogrow by 10%.
    I have been looking at the change in USED space in the replica volume and in the recovery point volume after new recovery points and have a hard time understanding it.
    After the first test, where data was loaded into the database and an Express Full Backup recovery point was created, I saw an increase in used space of 85 GB in the replica volume and 29 GB in the recovery point volume. That is somewhat more than I think the database grew (I realize I should have monitored that, but did not), but anyway it is not completely far out.
    In the next test I did the same thing except I loaded twice as much data into the database.
    Here is where it gets odd: This causes zero increased usage in the replica volume and 33 GB increased use in the recovery point volume.
    I do not understand why the replica volume use increases with some recovery points and not with others.
    Note that I am only discussing increased usage in the volumes - not actual volume growth. The volumes are still their original size.
    I have spent 3-4 days on the tests and the retention period is set to 12 days, so nothing should have expired yet.

    Hi,
    The replica volume usage represents the physical database file(s) size. The database file size on the replica should be equal to the database file size on the protected server; this covers both the .mdf and .ldf files. If, when you load data into the database, you overwrite current tables rather than adding new ones, or if there is white space in the database files and the load simply uses that white space, then the file size will not increase, so the replica used space will not increase. (That could explain your second test: if the earlier load left enough white space, even a larger load can fit without growing the files.)
    The recovery point volume will only contain the delta changes applied to the database files. As the changed blocks overwrite the files on the replica during an express full backup, the VSS (volsnap.sys) driver copies the old blocks that are about to be overwritten to the recovery point volume before allowing the change to be applied to the file on the replica.
    Hope this helps explain what you are seeing.
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • RAID configuration, better to logically split RAID5 or RAID1 with SQL?

    I want to set up a new SQL server with SQL 2008 R2. I have to use this version due to the application support behind it. I have a question regarding best practice for disk setup on a physical server.
    Here is my proposed setup, but the application owner had some questions.
    RAID1 - 2 x 146GB drives - OS drive
    RAID1 - 2 x 146GB drives - tempdb
    RAID1 - 2 x 146GB drives - translog
    RAID5 - 3 x 900GB drives - database location
    The owner was wondering if I could logically split up the RAID5 into 2 logical partitions, with their own drive letters, on the OS. They want a separate volume for SQL flat-file backups. Of course this is possible, but I was wondering which option they would be better off with. Here are my two ideas; which one would be better?
    Option 1: Split RAID5 physical into 2 logical, have 1 logical be used for the database, and 1 used for the backup.
    Option 2: Split one of the RAID1 pairs into two logical volumes, and have the tempdb on one of these logicals, and the translog on the other.  This would then free up one pair of RAID1 drives to be used for the flat file backup.
    Which of these two options would be a better configuration?  Assuming that the application owner does not wish to purchase 2 more drives.  

    Selecting the appropriate RAID type should not just follow standard rules like the ones below:
    User DB Data Disk: RAID 5
    User DB Log Disk: RAID 10
    Sys DB Disk: RAID 5
    Sys TempDB Data Disk: RAID 10
    Sys TempDB Log Disk: RAID 10
    Local Backup: RAID 5 is also OK if you can't afford RAID 10.
    The best approach is to monitor disk performance in the environment and check whether its capacity supports your application. In one of our cases we had to go for RAID 10 even for the database data disk, because disk usage demanded it.
    Keeping data and backups on separate disks is usually much better, since if a disk goes down with data and backup together, both are gone. So no to Option 1, and the same answer for Option 2.
    Go for separate disks. If you have to start with these files on the same disk, you can, and then decide later by watching the usage pattern; the same approach goes for any data/log disk, but about backups please be doubly sure.
    Santosh Singh

  • Using Entity Framework with SQL Azure - Reliability

    (This is a cross post from http://stackoverflow.com/questions/5860510/using-entity-framework-with-sql-azure-reliability since I have yet to receive any replies there)
    I'm writing an application for Windows Azure. I'm using Entity Framework to access SQL Azure. Due to throttling and other mechanisms in SQL Azure, I need to make sure that my code performs retries if an SQL statement has failed. I'm trying to come up with
    a solid method to do this.
    (In the code below, ObjectSet returns my EFContext.CreateObjectSet())
    Let's say I have a function like this:
      public Product GetProductFromDB(int productID)
      {
          return ObjectSet.Where(item => item.Id == productID).SingleOrDefault();
      }
    Now, this function performs no retries and will fail sooner or later in SQL Azure. A naive workaround would be to do something like this:
      public Product GetProductFromDB(int productID)
      {
          for (int i = 0; i < 3; i++)
          {
              try { return ObjectSet.Where(item => item.Id == productID).SingleOrDefault(); }
              catch { /* ignore and retry immediately */ }
          }
          return null; // all retries failed
      }
    Of course, this has several drawbacks: I will retry regardless of the SQL failure (retrying is a waste of time if it's a primary key violation, for instance), I will retry immediately without any pause, and so on.
    My next step was to start using the Transient Fault Handling library from Microsoft. It contains RetryPolicy which allows me to separate the retry logic from the actual querying code:
      public Product GetProductFromDB(int productID)
      {
          var retryPolicy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(5);
          var result = retryPolicy.ExecuteAction(() =>
              ObjectSet.Where(item => item.Id == productID).SingleOrDefault());
          return result;
      }
    The solution above is described in Best Practices for Handling Transient Conditions in SQL Azure Client Applications (Advanced Usage Patterns section): http://blogs.msdn.com/b/appfabriccat/archive/2010/10/28/best-practices-for-handling-transient-conditions-in-sql-azure-client-applications.aspx
    While this is a step forward, I still have to remember to use the RetryPolicy class whenever I want to access the database via Entity Framework. In a team of several people, this is a thing that is easy to miss. Also, the code above is a bit messy in my opinion.
    What I would like is a way to enforce that retries are always used, all the time. The Transient Fault Handling library contains a class called ReliableSqlConnection, but I can't find a way to use it with Entity Framework.
    Any good suggestions for this issue?

    Maybe some useful posts:
    http://blogs.msdn.com/b/appfabriccat/archive/2010/12/11/sql-azure-and-entity-framework-connection-fault-handling.aspx
    http://geekswithblogs.net/iupdateable/archive/2009/11/23/sql-azure-and-entity-framework-sessions-from-pdc-2009.aspx
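
    One way to get closer to "retries always used" is to funnel every query through a shared helper in a repository base class. A minimal sketch, assuming the same Transient Fault Handling Application Block types used above; the RetryingRepository and ExecuteWithRetry names are made up, and ObjectSet stands for the same context property described in the question:

      // Sketch: every data access goes through ExecuteWithRetry, so a team
      // member cannot forget the retry policy at individual call sites.
      public abstract class RetryingRepository
      {
          // One shared policy: 5 attempts with a fixed 1-second delay between them.
          private static readonly RetryPolicy Policy =
              new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(5, TimeSpan.FromSeconds(1));

          protected T ExecuteWithRetry<T>(Func<T> query)
          {
              return Policy.ExecuteAction(query);
          }
      }

      public class ProductRepository : RetryingRepository
      {
          // ObjectSet comes from your EFContext.CreateObjectSet(), as in the post.
          public Product GetProductFromDB(int productID)
          {
              return ExecuteWithRetry(() =>
                  ObjectSet.Where(item => item.Id == productID).SingleOrDefault());
          }
      }

    This doesn't make retries impossible to bypass, but it reduces the convention to "data access lives in repositories", which is easier to enforce in code review than a per-query rule.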

  • How to implement hand-rolled locking with SQL?

    Hi there,
    For now I have used java.util.concurrent's classes for locking critical sections which should only be modified/read by one client at a time. However, now a customer would like to cluster our app, and I wonder whether the locking could also be done with SQL.
    The reason why we don't use some high-level SQL constructs is that our product only uses a really very small subset of SQL.
    Any ideas whether/how double-checked locking could be implemented using SQL?
    Thank you in advance, lg Clemens

    Here is a link to a nice discussion on your topic. It might or might not help, but it did appear pertinent to your questions by talking about alternative methods and the pros and cons of auto-incrementing.

    Ooooh, are we back on this topic again? Delightful. And just in time for the holiday, when I shall have the time to compose lengthy replies on the subject.
    Since the last time this subject came up (and I recall having some discussion with duffymo about it) I have been doing further thinking about it all and here are my thoughts.
    Design Stage
    A bit of a rehash of what I said before. I think it is key during the database (note database and not application) design phase to design without using auto-generated keys. Your database integrity, that is the integrity of each table and the integrity of each relationship between tables must be able to "stand up" without the use of these keys.
    In general proper database design is, I believe, a learned craft. Since the amount of poor designs, at least what I come across, seems so prolific I think a good rule of thumb would be to apply the above rule to one's design stringently as a good practice that will create proper designs.
    Creation and Deployment Phase
    When you have a solid design then before creating and deploying the database design is a good time to look at where generated keys might be appropriate. Generally speaking I recommend using them for the following reasons
    1) Ease of development. Let's face it, joining key to key on a numeric column is much easier than joining on multiple variable columns.
    2) Performance. The time to access a multiple key field with variable length columns is going to be different than a single key numeric field. Now this item was refuted in the article WorkForFood linked above but I think there is an issue to consider here all the same. It all depends on the usage.
    I will certainly give you that database indexes are wonderful things, and in most cases the performance differences between searching by a multiple-column key vs a single-column key are going to be hardly measurable. But what about updating? And what about the size of your table?
    If the database is small or the database is largely for analysis and reporting then performance is (probably) not going to be an issue. However for a large scale transaction database I think one would be foolish to dismiss the performance impact out of hand.
    3) Long-term flexibility. This was a point raised by duffymo in the last go around and it's certainly worth considering. The general theory is this, when you use non-generated keys you are locking business logic into your database design. If you need to make changes later for business reasons you are up the proverbial creek.
    Personally, while I think the point is valid I am not overly sold on this one. I think there is only so much "planning against future changes" that one can and should do. I would rather see the needs of the current design met in an effective fashion before considering this. So I guess to me if all other things are equal this a reason to use generated keys.
    Rules for Using Generated Keys
    The two rules I would like to see people use when using generated keys are as follows.
    1) Always create a constraint on the real key. Again and again I encounter systems where this has not been done. You might as well not have any keys at all with this kind of disaster. If you can't create a constraint then I think you have to go back to the design stage and rethink this.
    2) Never use generated keys when one or more foreign keys form part of the primary key. Really this advice applies exclusively to tables that form many-to-many relationships. Auto-generated keys should never ever be used in tables like this. It's just a mess waiting to happen.
    Of course the key could be several foreign keys that are each auto-generated keys in their own tables...
    Other stuff I am going to refute in the linked article that doesn't fit anywhere else
    Well mainly I don't like the concept of application generated keys. Which is discussed in some of the replies. To me that is a mix of all the cons of using generated keys and the cons of using natural keys all at the same time while adding a new level of stuff tied into your application logic that will make life more difficult in the end. I just don't like it.
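
    Coming back to the original question about hand-rolled locking: one common pattern is a lock table whose rows are claimed with a conditional UPDATE. Below is a minimal sketch in C#/SqlClient (the app_locks table, its columns and all names here are hypothetical; the two SQL statements port to JDBC unchanged):

    // A DB-backed mutex: one row per lock name; a client holds the lock
    // only if its conditional UPDATE actually changed that row.
    // Assumed (hypothetical) schema:
    //   CREATE TABLE app_locks (lock_name VARCHAR(64) PRIMARY KEY, owner VARCHAR(64) NULL);
    using System.Data.SqlClient;

    static class DbLock
    {
        public static bool TryAcquire(SqlConnection conn, string lockName, string me)
        {
            // Test-and-set in a single statement: only one concurrent caller
            // can match "owner IS NULL", so there is no read-then-write race.
            const string sql = "UPDATE app_locks SET owner = @me " +
                               "WHERE lock_name = @name AND owner IS NULL";
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@me", me);
                cmd.Parameters.AddWithValue("@name", lockName);
                return cmd.ExecuteNonQuery() == 1; // 0 rows => someone else holds it
            }
        }

        public static void Release(SqlConnection conn, string lockName, string me)
        {
            const string sql = "UPDATE app_locks SET owner = NULL " +
                               "WHERE lock_name = @name AND owner = @me";
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@name", lockName);
                cmd.Parameters.AddWithValue("@me", me);
                cmd.ExecuteNonQuery();
            }
        }
    }

    The double-check is implicit: the UPDATE tests and sets ownership in one atomic statement. In practice you would also want an acquired_at column so that locks held by crashed clients can be expired; that is the main thing java.util.concurrent gives you for free that SQL does not.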

  • LcsCDR & QoEMetrics Purge and Usage Summary SQL Agent Jobs

    Hello.
    On our SQL instance that hosts the LcsCDR & QoEMetrics databases I have the following SQL Server Agent Jobs:
    LcsCDR_Purge
    LcsCDR_UsageSummary
    QoEMetrics_Purge
    QoEMetrics_UsageSummary
    I'd like to know if anyone else has these same jobs on their instances, as it's unclear from the job descriptions, their creation dates, and the code they run whether these are part of the core Lync system or something that has been set up separately afterwards.
    One thing I do know is that these jobs don't work correctly when you use database mirroring. I will elaborate on this if/when I can find out any more information about them first.
    Thanks

    Thank you for the reply.
    The problem with these jobs is that when they attempt to run against a database which is a mirror, the job fails. That would not seem to be a big deal, except our policy (like many other SQL shops, I imagine) is to send e-mail notifications each and every time a SQL Agent job fails on any server, not just those associated with Lync.
    As it stands, these jobs are not configured on either the principal or mirror server to notify anyone, as I'd get an e-mail every hour on the hour from the mirror server telling me that a job has failed (even though it didn't actually need to do anything), giving the impression something is wrong somewhere. This is quite a common trap to fall into with SQL Agent jobs and database mirroring.
    Remember that because Lync uses automatic failover, either server could be the mirror at any given time, so there is no mileage in setting the e-mail notifications on just the server that is currently the principal.
    What this job should do, then, is check first whether it is running on the mirror, and if it is, do nothing instead of failing each time. This will enable e-mail notifications to be set up on the jobs on both servers, so when a job fails we know there is potentially a genuine problem to be investigated.
    Each job contains two steps; I'll use QoEMetrics_UsageSummary as the example.
    The code in step 1 is this, which causes the job to error out if it attempts to run against the mirror.
    declare @_MirroringRole int
    set @_MirroringRole = (
        select mirroring_role
        from sys.database_mirroring
        where database_id = DB_ID('QoEMetrics')
    )
    if (@_MirroringRole is not NULL and @_MirroringRole = 2) begin
        raiserror('Database is mirrored', 16, 1)
    end
    else begin
        print ('Database is not mirrored')
    end
    I rewrote the job as a single step that runs from the master database and, as mentioned above, does not error out when it realises it is trying to run against a mirror.
    DECLARE @_MirroringRole INT
    SET @_MirroringRole = (
        SELECT mirroring_role
        FROM sys.database_mirroring
        WHERE database_id = DB_ID('QoEMetrics'))
    -- if this is not a mirrored database or it's the principal, attempt to run usage summary op
    IF (@_MirroringRole IS NULL OR @_MirroringRole = 1)
    BEGIN
        DECLARE @_Sql NVARCHAR(4000)
        SET @_Sql =
            'USE [QoEMetrics];
             DECLARE @_UpdatingSkipped INT
             EXEC dbo.RtcRegularMaintainDatabase @_UpdatingSkipped = @_UpdatingSkipped output'
        EXEC (@_Sql)
    END
    ELSE -- if the database is a mirror, don't do anything
    BEGIN
        PRINT 'there is nothing to do, the database is a mirror'
    END
    The same logic can be applied to the other three jobs. I was intending to suggest this as a Connect item, except there doesn't seem to be a section for Lync, so I will contact product support and see what they think.
    Thanks again.

  • Query with SQL SP gives error if not used for a few days

    Hi All,
    I have observed that if we do not use some of our queries (which use SQL SPs) for a few days, they stop working.
    But when we go into SQL and execute the SP, the query in SAP starts working again, without making any change to either the query or the SQL SP.
    Can anybody throw light on this? I guess it has some connection with SQL SP behaviour.
    Thanking you in advance ,
    Samir Gandhi

    Hi Gordon,
    Please note the function of the SP is to bring selected data from tables, for example Purchase details (I have copy-pasted the SP at the bottom of this message).
    These are not SP_notif...
    These SPs are called from the SBO query.
    Once we execute the SP in SQL, it starts working with SBO again.
    set ANSI_NULLS ON
    set QUOTED_IDENTIFIER ON
    GO
    ALTER Procedure [dbo].[pGetPurchaseRegister]
        @StartDate datetime,
        @EndDate datetime
    as
    if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[tmpPurchaseReg1]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
        drop table tmpPurchaseReg1
    if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[tmpPurchaseReg2]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
        drop table tmpPurchaseReg2
    SELECT T1.LineNum, T0.DocNum AS 'Document Number', T0.DocDate AS 'Posting Date', T0.CardCode AS 'Vendor Code',
           T0.CardName AS 'Vendor Name', T0.NumAtCard AS 'Vendor Ref. No', T0.VatSum AS 'Total Tax', T0.DocTotal AS 'Document Total',
           T1.AcctCode AS 'Account Code', T1.LineTotal AS 'Basic Amount', T1.ItemCode AS 'Item No.', T1.Dscription AS 'Item/Service Description',
           T2.SuppCatNum, T1.INMPrice AS 'Item Cost', T1.Quantity AS 'Quantity', T3.ItmsGrpNam AS 'ItemGroup'
    into tmpPurchaseReg1
    FROM [dbo].[OPCH] T0
         INNER JOIN [dbo].[PCH1] T1 ON T1.DocEntry = T0.DocEntry
         INNER JOIN [dbo].[OITM] T2 ON T1.ItemCode = T2.ItemCode
         INNER JOIN [dbo].[OITB] T3 ON T2.ItmsGrpCod = T3.ItmsGrpCod
    --WHERE T0.DocDate >= CONVERT(DATETIME, [%0], 112) AND T0.DocDate <= CONVERT(DATETIME, [%1], 112)
    WHERE T0.DocDate >= @StartDate AND T0.DocDate <= @EndDate
    ORDER BY T0.DocNum
    select * into tmpPurchaseReg2 from tmpPurchaseReg1
    declare @Total int
    declare @TmpCardCode varchar(200)
    declare @DocNum int
    declare PurchseRegister_Cursor cursor LOCAL for
        select [Document Number] from tmpPurchaseReg1 group by [Document Number] having count([Document Number]) > 1 order by [Document Number]
    open PurchseRegister_Cursor
    fetch next from PurchseRegister_Cursor into @DocNum
    while @@fetch_status = 0
    begin
        if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[tmpPurchaseRegTemp]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
            drop table tmpPurchaseRegTemp
        select top 1 * into tmpPurchaseRegTemp from tmpPurchaseReg2 where [Document Number] = @DocNum
        select @TmpCardCode = [LineNum] from tmpPurchaseRegTemp
        update tmpPurchaseReg2 set [Vendor Name] = '', [Total Tax] = 0, [Document Total] = 0
               where [Document Number] = @DocNum and [LineNum] <> @TmpCardCode
        fetch next from PurchseRegister_Cursor into @DocNum
    end
    close PurchseRegister_Cursor
    deallocate PurchseRegister_Cursor
    BR
    Samir Gandhi

  • How to make column headers in a table in a PDF report appear bold while the data in the table appears regular, from C# Windows Forms with SQL Server 2008 using iTextSharp

    Hi, my name is Vishal.
    For the past 10 days I have been breaking my head over how to make the column headers of a table appear bold while the data in the table appears regular, in a PDF report generated from C# Windows Forms with SQL Server 2008 using iTextSharp.
    Given below is my C# code for exporting data from different SQL Server tables to a PDF report using iTextSharp:
    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    using System.Windows.Forms;
    using System.Data.SqlClient;
    using iTextSharp.text;
    using iTextSharp.text.pdf;
    using System.Diagnostics;
    using System.IO;

    namespace DRRS_CSharp
    {
        public partial class frmPDF : Form
        {
            public frmPDF()
            {
                InitializeComponent();
            }

            private void button1_Click(object sender, EventArgs e)
            {
                Document doc = new Document(PageSize.A4.Rotate());
                var writer = PdfWriter.GetInstance(doc, new FileStream("AssignedDialyzer.pdf", FileMode.Create));
                doc.SetMargins(50, 50, 50, 50);
                doc.SetPageSize(new iTextSharp.text.Rectangle(iTextSharp.text.PageSize.LETTER.Width, iTextSharp.text.PageSize.LETTER.Height));
                doc.Open();
                PdfPTable table = new PdfPTable(6);
                table.TotalWidth = 530f;
                table.LockedWidth = true;
                PdfPCell cell = new PdfPCell(new Phrase("Institute/Hospital:AIIMS,NEW DELHI", FontFactory.GetFont("Arial", 14, iTextSharp.text.Font.BOLD, BaseColor.BLACK)));
                cell.Colspan = 6;
                cell.HorizontalAlignment = 0;
                table.AddCell(cell);
                Paragraph para = new Paragraph("DCS Clinical Record-Assigned Dialyzer", FontFactory.GetFont("Arial", 16, iTextSharp.text.Font.BOLD, BaseColor.BLACK));
                para.Alignment = Element.ALIGN_CENTER;
                iTextSharp.text.Image png = iTextSharp.text.Image.GetInstance("logo5.png");
                png.ScaleToFit(105f, 105f);
                png.Alignment = Element.ALIGN_RIGHT;
                SqlConnection conn = new SqlConnection("Data Source=NPD-4\\SQLEXPRESS;Initial Catalog=DRRS;Integrated Security=true");
                SqlCommand cmd = new SqlCommand("Select d.dialyserID,r.errorCode,r.dialysis_date,pn.patient_first_name,pn.patient_last_name,d.manufacturer,d.dialyzer_size,r.start_date,r.end_date,d.packed_volume,r.bundle_vol,r.disinfectant,t.Technician_first_name,t.Technician_last_name from dialyser d,patient_name pn,reprocessor r,Techniciandetail t where pn.patient_id=d.patient_id and r.dialyzer_id=d.dialyserID and t.technician_id=r.technician_id and d.deleted_status=0 and d.closed_status=0 and pn.status=1 and r.errorCode<106 and r.reprocessor_id in (Select max(reprocessor_id) from reprocessor where dialyzer_id=d.dialyserID) order by pn.patient_first_name,pn.patient_last_name", conn);
                conn.Open();
                SqlDataReader dr = cmd.ExecuteReader();
                table.AddCell("Reprocessing Date");
                table.AddCell("Patient Name");
                table.AddCell("Dialyzer(Manufacturer,Size)");
                table.AddCell("No.of Reuse");
                table.AddCell("Verification");
                table.AddCell("DialyzerID");
                while (dr.Read())
                {
                    table.AddCell(dr[2].ToString());
                    table.AddCell(dr[3].ToString() + "_" + dr[4].ToString());
                    table.AddCell(dr[5].ToString() + "-" + dr[6].ToString());
                    table.AddCell("@count".ToString());
                    table.AddCell(dr[12].ToString() + "-" + dr[13].ToString());
                    table.AddCell(dr[0].ToString());
                }
                dr.Close();
                table.SpacingBefore = 15f;
                doc.Add(para);
                doc.Add(png);
                doc.Add(table);
                doc.Close();
                System.Diagnostics.Process.Start("AssignedDialyzer.pdf");
                if (MessageBox.Show("Do you want to save changes to AssignedDialyzer.pdf before closing?", "DRRS", MessageBoxButtons.YesNoCancel, MessageBoxIcon.Exclamation) == DialogResult.Yes)
                {
                    var writer2 = PdfWriter.GetInstance(doc, new FileStream("AssignedDialyzer.pdf", FileMode.Create));
                }
                else if (MessageBox.Show("Do you want to save changes to AssignedDialyzer.pdf before closing?", "DRRS", MessageBoxButtons.YesNoCancel, MessageBoxIcon.Exclamation) == DialogResult.No)
                {
                    this.Close();
                }
            }
        }
    }
    The above code executes with no problems at all!
    As you can see, the file to which I create, save and open my PDF report is AssignedDialyzer.pdf.
    The column headers of the table in the PDF report are "Reprocessing Date", "Patient Name", "Dialyzer(Manufacturer,Size)", "No.of Reuse", "Verification" and "DialyzerID".
    However, the problem I am facing is that after the document is generated and opened, the column headers and the data in the table all appear in bold.
    I have browsed the net for a solution but with no success.
    What I want is for my PDF report from C# to be similar to the following format, which I was able to accomplish in VB6/ADODB with MS Access using iTextSharp.
    Given below is the report which I achieved from VB6/ADODB with MS Access using iTextSharp.
    I know that there has to be a way to solve my problem. I have browsed many articles on exporting SQL data to the above format but with no success!
    Is there any way to export SQL data from C# Windows Forms using iTextSharp to the format given in the picture/image above? If so, can anyone tell me what modifications I must make in my C# code given above so that my PDF report will look similar to the image/picture (PDF report) which I was able to accomplish from VB6/ADODB with MS Access using iTextSharp?
    I have approached SourceForge.net for help but with no success.
    I hope someone truly understands what I am trying to ask!
    I know I have to make a lot of modifications in my C# code to achieve this level of perfection, but I don't know how to do it.
    Can anyone help me please! Any help/guidance in solving this problem would be greatly appreciated.
    I hope I get a reply that solves this problem.
    vishal

    Hi,
    Regarding the iTextSharp component issue, I think this case is off-topic here.
    I suggest consulting the component provider:
    http://sourceforge.net/projects/itextsharp/
    Regards,
    Marvin
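
    That said, the usual iTextSharp fix for this is to give every cell an explicit font, rather than calling table.AddCell(string) and inheriting a default font. A minimal sketch using the same FontFactory/PdfPCell/Phrase API already present in the code above (the 10pt size is illustrative):

    // Bold font for the header row, regular font for data cells.
    var headerFont = FontFactory.GetFont("Arial", 10, iTextSharp.text.Font.BOLD, BaseColor.BLACK);
    var dataFont = FontFactory.GetFont("Arial", 10, iTextSharp.text.Font.NORMAL, BaseColor.BLACK);

    // Header row: wrap each title in a Phrase that carries the bold font.
    string[] titles = { "Reprocessing Date", "Patient Name", "Dialyzer(Manufacturer,Size)",
                        "No.of Reuse", "Verification", "DialyzerID" };
    foreach (string title in titles)
    {
        table.AddCell(new PdfPCell(new Phrase(title, headerFont)));
    }

    // Data rows: same idea inside the while (dr.Read()) loop, for example:
    table.AddCell(new PdfPCell(new Phrase(dr[2].ToString(), dataFont)));

    Whether a table renders everything bold usually comes down to which font ends up as the implicit default; making the font explicit per cell removes the ambiguity.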

  • How to do it with SQL Loader

    All,
    I have two tables, HEADER_TABLE and LINE_TABLE. Each header record can have multiple line records. I have to load data from a flat file into these tables. The flat file can have two types of records, H (header) and L (line), and looks as follows; each H record can have multiple corresponding L records:
    H..........
    L.......
    L......
    L......
    H.........
    L.......
    L......
    L......
    I have a HEADER_ID column in HEADER_TABLE, and HEADER_ID and LINE_ID columns in LINE_TABLE.
    While loading data using SQL*Loader, I need to generate HEADER_ID and LINE_ID values as follows and load them:
    H..........<HEADER_ID = 1>
    L....... <HEADER_ID = 1><LINE_ID = 1>
    L...... <HEADER_ID = 1><LINE_ID = 2>
    L...... <HEADER_ID = 1><LINE_ID = 3>
    H......... <HEADER_ID = 2>
    L....... <HEADER_ID = 2><LINE_ID = 4>
    L...... <HEADER_ID = 2><LINE_ID = 5>
    L...... <HEADER_ID = 2><LINE_ID = 6>
    Is it possible to do this with SQL*Loader?
    I tried to do it with sequences, but it loaded the tables as follows.
    H..........<HEADER_ID = 1>
    L....... <HEADER_ID = 1><LINE_ID = 1>
    L...... <HEADER_ID = 1><LINE_ID = 2>
    L...... <HEADER_ID = 1><LINE_ID = 3>
    H......... <HEADER_ID = 2>
    L....... <HEADER_ID = 1><LINE_ID = 4>
    L...... <HEADER_ID = 1><LINE_ID = 5>
    L...... <HEADER_ID = 1><LINE_ID = 6>
    Thanks
    Ketha

    Morgan,
    Examples given in the link are quite generic and I have tried them. But my requirement is focused on generating HEADER_ID and LINE_ID values as I have described. It seems that SQLLDR scans all records for a particular WHEN clause and inserts them into the specified table. I think that if SQLLDR were made to read the records in the data file sequentially, this could be done.
    Any idea how to make SQLLDR read the records from the file sequentially?
    Thanks
    Ketha

  • SharePoint Foundation 2010 compatibility with SQL Server 2014

    Due to a requirement to use SharePoint without Active Directory access, I am trying to find explicit confirmation of whether SharePoint Foundation 2010 is compatible with SQL Server 2014. I have seen that SharePoint Server 2010 and 2013 are compatible with SQL Server 2014 as of CU1 (released in April), but there is no mention of SharePoint Foundation.
    This chart is great, except it doesn't mention whether the Foundation versions are supported:
    http://msdn.microsoft.com/en-us/library/gg492257.aspx
    Thanks.

    Note that that chart is for the Reporting Services component, but not necessarily for the SQL Database Engine component (support for which comes from the SharePoint Product Group, rather than the SQL Product Group). I haven't seen any confirmation that SharePoint 2010 is supported at all with SQL 2014 as a Database Engine.
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Can I use the same ADF application with SQL Server 2008 as well as Oracle 11g?

    Hi ,
    I have created an application in ADF (using JDeveloper), with SQL Server 2008 as the database provider. I have the same database in Oracle 11g and want to connect my application to the Oracle 11g database. I changed the connection and connected to Oracle, but when I tried to save or delete any data from the front end, it gave an error. Any solution for this?
    Thanks

    Hi,
    I have created an Entity object and a View object for every page in my application. Those objects were created from the SQL Server database, and the application works fine with SQL Server. But when connected to Oracle 11g, it gives the following errors:
    On clicking the search control: ORA-00923: FROM keyword not found where expected.
    On searching by a particular field: ORA-01722: invalid number.
    On clicking the Save button: ORA-00933: SQL command not properly ended.
    These are the error messages from the IntegratedWebLogicServer log:
    java.sql.SQLSyntaxErrorException: ORA-00923: FROM keyword not found where expected
    <QueryCollection> <buildResultSet> [3929] java.sql.SQLSyntaxErrorException: ORA-00923: FROM keyword not found where expected
    Caused by: java.sql.SQLSyntaxErrorException: ORA-00923: FROM keyword not found where expected
    <DCBindingContainer> <cacheException> [3947] * * * BindingContainer caching EXCEPTION:oracle.jbo.SQLStmtException
    <DCBindingContainer> <cacheException> [3948] java.sql.SQLSyntaxErrorException: ORA-00923: FROM keyword not found where expected
    <DCBindingContainer> <cacheException> [3949] * * * BindingContainer caching EXCEPTION:oracle.jbo.SQLStmtException
    <DCBindingContainer> <cacheException> [3950] java.sql.SQLSyntaxErrorException: ORA-00923: FROM keyword not found where expected
    <DCBindingContainer> <cacheException> [3951] * * * BindingContainer caching EXCEPTION:oracle.jbo.SQLStmtException
    <DCBindingContainer> <cacheException> [3952] java.sql.SQLSyntaxErrorException: ORA-00923: FROM keyword not found where expected
    <DCBindingContainer> <cacheException> [3953] * * * BindingContainer caching EXCEPTION:oracle.jbo.SQLStmtException
    <DCBindingContainer> <cacheException> [3954] java.sql.SQLSyntaxErrorException: ORA-00923: FROM keyword not found where expected
    java.lang.NullPointerException

  • Comparing the BOM usage with the actual usage for materials.

    Hi All
    My client wants to compare the BOM usage with the actual usage and have a report for this within a period. More specifically, they want to calculate the BOM usage based on the requirement quantity from the confirmation, but without the scrap %, multiplied by the confirmed quantity of the header material.
    The actual usage should be based on goods issues from stock, either as goods issue to the order (backflushing) or as goods issue to a cost center.
    I haven't been able to identify a standard report for doing this - do some of you know of one?
    - I was thinking of the following method:
    Look at table RESB and compare it with MSEG, but I have a couple of problems with this:
    - In RESB there is no quantity before scrap.
    - The data volume in MSEG is so huge that it is not possible to search a whole period (e.g. a month).
    Br. M

    There is one standard report, MCRX, which gives a comparison of the quantity in the order and the actual consumption.
    If it doesn't meet your need, you may have to create a custom report.
    The logic can be: find the quantities as per the BOM for the produced quantities, and the quantities issued to the order.
    Tables you may need are AUFK, AUFM, AFKO, STAS, STPO, MSEG, MKPF,
    and JEST (if you want to filter using order status).

  • Report region with SQL query

    Hi
    I have a report region with a SQL query. There are two regions on the page. At the top of the page the user enters data, and after that a second region shows the entered data; it is a report region based on a SQL query.
    Now, when this page is opened, as the user has not entered anything, the report region shows the "no data found" message. Is it possible to remove that message, or can I conditionally display the report region, i.e. display it only once data has been inserted?
    Thanks

    >
    I was trying with select count(1) in an expression.
    >
    Just for your info, COUNT() (without any grouping, obviously) will always return 1 row. If there are no results for the query, then 1 row will be returned with a value of zero - so there are results returned.
    Secondly, why were you using COUNT(1) rather than COUNT(*)? That it is faster is a very common misconception and not true. If you need to know how many rows have been returned, use COUNT(*). If you need to take nulls into account (i.e. not include them in your count), then use COUNT(column_name) and name the column that you are interested in specifically.
    Cheers
    Ben

  • DI Server with SQL Server 2008

    Hello.
    In the company where I work we develop an application that uses B1WS.
    In the development environment we had the following characteristics:
    SAP Business One 2007 SP01, PL 09
    SQL Server 2005 Developer Edition
    Windows XP Professional SP 3
    SAP Business One Web Services 1.0
    At the time we were deploying SAP Business One version 8.8, and I had a serious problem. The test environment has the following characteristics:
    SQL Server 2008 Enterprise Edition
    SAP Business One 8.8 PL 10
    Windows Server 2008
    The problem is that when trying to connect to the DI Server generates the following messages:
    The TAO NT Naming Service has stopped. (without DBUser and DBPassword)
    Unable to access SBO-Common database. (with DBUser and DBPassword)
    I tested the connection directly to the DI Server, using the example available in the SDK, and it generates the same error. The SAP interface connects correctly.
    Does the DI Server allow connecting with SQL Server 2008?
    Message:
    <DatabaseServer>mv-05910a0046</DatabaseServer>
    <DatabaseName>Prueba</DatabaseName>
    <DatabaseType>dst_MSSQL2008</DatabaseType>
    <DatabaseUsername>sa</DatabaseUsername>
    <DatabasePassword>12345</DatabasePassword>
    <CompanyUsername>manager</CompanyUsername>
    <CompanyPassword>12345</CompanyPassword>
    <Language>ln_English</Language>
    <LicenseServer>mv-05910a0046:30000</LicenseServer>
    Thanks

    Had the same error:
    'The TAO NT Naming Service has stopped.'
    I was using the external IP address for the LicenseServer - which is on the same server as the SAP DI Server and B1WS.
    I changed to using localhost:30000 and that solved the issue.
    I suspect that if I had used the internal IP it would have worked as well (i.e. the one you get from ipconfig), but as it works I'm not going to test that.
    HTH

  • JDBC with SQL Server

    Hi,
    I am trying to connect to a SQL Server database, referring to a table in the SPVC database as spvc.dbo.tablename. If I want to access another database (SPVC_D), how can I do it without changing my Java code? Can I add the database name to the connection string so that I can remove the hard-coded prefix (spvc.dbo.) from my Java code? How would I do that?
    Please let me know; I am very new to JDBC with SQL Server.
    Thanks
    Praveen Padala

    In the MS SQL Server that I use, the syntax
    spvc.dbo.tablename
    breaks down as follows:
    <database>.<owner>.<tablename>
    However, connections are always to a database. Thus, when one uses the above syntax, it basically runs 'in' the one database and accesses another.
    Depending on various attributes of the tables: if, and only if, a table called 'tablename' exists in a database called 'spvc_d', then if you connect to 'spvc_d' you can access 'tablename' just by using 'tablename'.
    So if that is what you are doing, you will not have a problem.
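
    To make the target database a configuration concern rather than a code concern, the database can usually be named in the JDBC URL itself. For example, assuming the Microsoft SQL Server JDBC driver (the host and port below are placeholders), the URL property is databaseName:

    jdbc:sqlserver://yourserver:1433;databaseName=SPVC_D

    Connect with that URL and your queries can then use dbo.tablename (or just tablename), so switching between SPVC and SPVC_D becomes a connection-string change only, with no hard-coded database prefix in the Java code.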
