Distributed query using local BackingMap

I'm trying to create a distributed query (deriving a task from AbstractInvocable). I want each cluster node to query only its local storage, aggregate a result, and then return that single result to the invoker of the task.
     To do this I'm using the BackingMap for a NamedCache and running the query on this. My question is, since this is a Map, how can I run the query using Filter objects?
     On the NamedCache (i.e. the entire cache) I can query with a Filter, but to achieve the same thing with a local BackingMap I have to iterate through the entire Map. What am I missing?!
     Many Thanks,
     Jools

Thanks Cameron.
     We can query the NamedCache using Filters - that's fine.
     What I'm trying to achieve, though, is to use distributed tasks running on each node. The reason is that we're planning on using a widely distributed cache with nodes in both the US and the UK, and as such we're trying to minimise bandwidth usage by having the distributed task perform some aggregation on the local node and return only the result.
     However, if we can only get a Map interface on the local node, and therefore cannot use Filters (or indexes for that matter) and can only iterate through the contents of the Map looking for properties that match our criteria, then it may end up being too inefficient to do it this way.
     Are there no config settings that govern what type of object gets returned as the backing map on the local node?
     Thanks,
     Jools
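For what it's worth, the per-node pattern Jools describes (evaluate a condition against only the local entries, then ship back a single aggregate) can be sketched with plain java.util types. The Coherence-specific plumbing (the invocable, the backing-map access, Filter objects) is deliberately omitted, so treat the names below as hypothetical illustration, not the Coherence API:

```java
import java.util.Map;
import java.util.function.Predicate;

public class LocalAggregate {
    // Simulates what each node's invocable would do: evaluate a predicate
    // against its local entries only and reduce them to one partial result,
    // so only the aggregate (not the matching entries) crosses the wire.
    static int countMatching(Map<String, Integer> localBackingMap,
                             Predicate<Integer> filter) {
        int count = 0;
        for (Integer value : localBackingMap.values()) {
            if (filter.test(value)) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // Hypothetical data, standing in for two nodes' local storage.
        Map<String, Integer> node1 = Map.of("a", 5, "b", 12, "c", 40);
        Map<String, Integer> node2 = Map.of("d", 7, "e", 25);
        // Each node computes its partial count locally...
        int partial1 = countMatching(node1, v -> v > 10);
        int partial2 = countMatching(node2, v -> v > 10);
        // ...and the invoker combines the partials.
        System.out.println(partial1 + partial2); // 3
    }
}
```

The iteration over the local map is exactly the cost being asked about: without index support on the backing map, each node does a local scan, but the bandwidth goal is still met because only the combined result returns to the invoker.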

Similar Messages

  • Avoid Distributed query in PL/SQL cursor code

    Hi,
    I have to avoid a distributed qry in my cursor code in PL/SQL.
    The query follows like this,
    cursor c1
    is
    select a.test,b.test1,a.test2
    from apple a,
    [email protected] b,
    bat c
    where a.listid = b.listid
    and a.list_name = c.list_name;
    Now I need to split the above cursor into two:
    (1) query apple and bat, which are in the local database, into one, and
    (2) store the values from [email protected] in a temp table or PL/SQL table, so that I can use the PL/SQL table or temp table in the join in my cursor instead of having a distributed query.
    By doing so, will performance be hit badly?
    [Note: Imagine this scenario is taking place in Oracle 11i Apps]
    Regards,
    Prasanna Natarajan,
    Oracle ERP Tech Team.
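Not an answer to the PL/SQL specifics, but the split being described is essentially a hash join done locally: fetch the remote rows once, key them on the join column, then probe from the local rows instead of going over the database link per row. A rough sketch of that pattern in plain Java collections (all data and names hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class LocalJoin {
    // remoteByListId plays the role of the temp / PL/SQL table filled once
    // from the remote database; localRows stands in for the local table.
    static List<String> join(Map<Integer, String> remoteByListId,
                             Map<Integer, String> localRows) {
        List<String> result = new ArrayList<>();
        for (Map.Entry<Integer, String> local : localRows.entrySet()) {
            // Probe the cached remote rows in memory instead of issuing
            // a distributed query for every local row.
            String remote = remoteByListId.get(local.getKey());
            if (remote != null) {
                result.add(local.getValue() + "/" + remote);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<Integer, String> remote = Map.of(1, "r1", 2, "r2");
        Map<Integer, String> local = Map.of(2, "l2", 3, "l3");
        System.out.println(join(remote, local)); // only listid 2 matches
    }
}
```

Whether this helps or hurts performance depends on how large the remote result set is relative to the join selectivity, which is exactly the trade-off the poster is asking about.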

    [url http://groups.google.de/group/comp.databases.oracle.server/browse_frm/thread/df893cf9be9b2451/54f9cf0e937d7158?hl=de&tvc=1&q=%22Beautified%22+code+runs+slower#54f9cf0e937d7158]Recently somebody complained about slow performance after code was beautified in PL/SQL Developer; after recompilation without the flag "Add Debug Information" it ran faster...
    (just a guess)
    Best regards
    Maxim

  • Query about local storage

    Hi,
     I have a query about local storage.
     I have a machine that hosts WebLogic and Tangosol. I have an EJB that accesses a distributed cache, i.e. NamedCache cache = CacheFactory.getCache("MyCache").
     I modified tangosol-coherence.xml, set local-storage to false (for the distributed cache) and replaced the file in coherence.jar.
     I'm using an overflow scheme, and the back map uses a disk scheme.
     I also start a separate standalone instance of Tangosol, and I set the local-storage system property to true for the standalone instance.
     I start the standalone instance first and then WebLogic.
     The idea is to ensure that the Tangosol instance in WebLogic (the WebLogic JVM) does not participate in storing data (hence local storage false);
     only the JVM for the standalone instance should store data (hence local storage true via the system property).
     I wanted to know whether the property "local-storage" pertains to a member (machine) or to a JVM.
     The reason for this doubt: as I'm using a disk scheme, Tangosol creates a file for an overflow (e.g. lh014402~.tp). I can see two such files, when ideally I would have wanted only one for the Tangosol instance.
     -rw-r--r-- 1 zephyr users 8364032 2005-06-23 17:02 lh014402~.tp
     -rw-r--r-- 1 zephyr users 8364032 2005-06-23 17:02 lh014403~.tp
     Can you please let me know if we can configure Tangosol in such a way that we have two separate instances running, with local storage false for one and true for the other?
         Awaiting your reply
         Thanks
         Vinay

    I would suggest leaving the default 'local-storage' value set to 'true' in the tangosol-coherence.xml and just use the JVM argument to control the local storage of each individual node. Then start the stand alone instance normally (I assume you are using the com.tangosol.net.DefaultCacheServer) and start the WebLogic instance with the following:
         java [...] -Dtangosol.coherence.distributed.localstorage=false [...]
         Hope this helps.
         Later,
         Rob Misek
         Tangosol, Inc.
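On Vinay's underlying question: a JVM system property is scoped to the process, not the machine, so two Tangosol instances on one box can disagree about local storage. A minimal, Coherence-free illustration (the property name is taken from Rob's reply; the helper method is hypothetical):

```java
public class StorageFlag {
    // Reads the same flag Rob's JVM argument sets. Each JVM process has its
    // own system-property table, so the DefaultCacheServer JVM and the
    // WebLogic JVM on the same machine can hold different values.
    static boolean localStorageEnabled() {
        return Boolean.parseBoolean(
            System.getProperty("tangosol.coherence.distributed.localstorage", "true"));
    }

    public static void main(String[] args) {
        System.out.println(localStorageEnabled()); // true unless -D...=false was passed
    }
}
```

Run one JVM with `-Dtangosol.coherence.distributed.localstorage=false` and the other without it, and each process sees only its own value, which is why the per-JVM flag is the recommended control.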

  • Cannot save query as local object

    Dear community,
    One of our users is not able to save a query change as a local object (not to be transported).  While another user is able to do this.
    Since we don't want the query changes to be transported, I must find a way to allow the save as local object in $TMP.
    An authorizations trace shows everything is OK from job role perspective.
    When the user changes a query in development and chooses "Save As", she is prompted for a transport.  At this time if she chooses "local object" then she is taken to the create a transport dialog.
    For another user this is not the case, and the 2nd user is able to save query changes locally without a transport request.
    Any thoughts on what might be the cause for this ?

    Hi Keith,
    Without knowing the details of your system settings and the system where it happens (prod, dev, ...), one general remark:
    If you save a query, lots of repository objects are generated and written to a transport request (i.e. structures, selections, ...), not only the one for the query itself. Maybe in your case the original query uses a globally defined structure in the columns or rows. If you now want to "save as" a new query, many objects are checked for transportation. The global structure cannot be saved as a local object, because it was already transported before, whilst the new query itself can be stored as a local object. This means not all objects of the new query can be saved as local, because some are already transported.
    One way to check whether this could also be your problem: both of your mentioned example users should try to save exactly the same query as a new local query.
    Regards
    Adios

  • Peformance tuning of query using bitmap indexes

    Hello guys
    I just have a quick question about tuning the performance of sql query using bitmap indexes..
    Currently, there are 2 tables, date and fact. The fact table has about 1 billion rows and the date dim has about 1 million. These 2 tables are joined on 2 columns:
    Date.Dateid = Fact.snapshot.dates and Date.companyid = fact.companynumber
    I have a query that needs to be run as the following:
    Select dates.dayofweek, dates.dates, fact.opened_amount from dates, facts
    where Date.Dateid = Fact.snapshot.dates and Date.companyid = fact.companynumber and dates.dayofweek = 'monday'.
    Currently this query runs forever. I think it is the joining that takes a lot of time. I have created a bitmap index on the dayofweek column because it is low in distinct values, but it didn't seem to speed up the performance.
    I'd like to know what other indexes would be helpful for me. I am thinking of creating another one for companynumber since it also has few distinct values.
    Currently the query is generated by front-end tools like OBIEE, so I can't change the SQL, nor can I purge data or create a smaller table; I have to work with what I have.
    So please let me know your thoughts in terms of performance tuning.
    Thanks

    The explain plan is:
    Operation                               Rows  Cost  Bytes
    SELECT STATEMENT optimizer                 1     1
    NESTED LOOPS                               1     1    299
    PARTITION LIST ALL                         1     0    266
    INDEX FULL SCAN RD_T.PK_FACTS_SNPSH        1     0    266
    TABLE ACCESS BY INDEX ROWID DATES_DIM      1     1     33
    INDEX UNIQUE SCAN DATES_DIM_DATE           1     1
    There are no changes nor wait states, but the query is taking 18 mins to return results. When it does, it returns 1 billion rows, which is the same number of rows as the fact table... (strange?)

    That's not a bitmap plan. Plans using bitmaps should have steps indicating bitmap conversions; this plan is listing ordinary b-tree index access. The rows and bytes on the plan, for the volume of data you suggested, have to be incorrect (1 row instead of 1B?).
    What version of the database are you using?
    What is your partition key?
    Are the partitioned table indexes global or local? Is the partition key part of the join columns, and is it indexed?
    Analyze the tables and all indexes (use dbms_stats) and see if the statistics get better. If that doesn't work try the dynamic sampling hint (there is some overhead for this) to get statistics at runtime.
    I have seen stats like the ones you listed appear in 10g myself.

  • Query the Local Computer Policy with PowerShell

    One of our applications requires some local computer policy settings for some services accounts and we wanted to be able to query these values with a Remote PowerShell window. 
    I was unable to find the registry keys that hold the Local Computer Policy and I also tried activating and importing the import-module grouppolicy but couldn’t figure out how to query the local policy. 
    Below are the values I am interested in seeing. I also tried the posting here, which was most like what I was looking for, but no luck.
    SeAssignPrimaryTokenPrivilege(Replace a process-level token)
    SeImpersonatePrivilege (Impersonate a client after authentication)
    SeServiceLogonRight (Log on as a service)
    SeIncreaseQuotaPrivilege (Adjust memory quotas for a process)
    SeBatchLogonRight (logon as a batch job)
    https://social.technet.microsoft.com/Forums/scriptcenter/en-US/9fac4ebd-ab68-4ee9-8d5a-44413f08530e/wmi-query-for-user-rights-assignment-local-computer-policy?forum=ITCG
    Thanks,
    Chris
    Chris J.

    The local GPO is a bit tricky.  Administrative Templates go into a registry.pol file, but the rest of the settings you see in the local GPO are just configured on the computer (generally in the registry somewhere).  If you change, for example, the user rights assignments with ntrights.exe, you'll see those changes reflected in the local Group Policy object as well.  This is different from domain GPOs, where there's an INF file that contains all the settings that aren't part of an administrative template registry.pol file.
    Regarding user rights assignments, there's no quick and easy way to get at this information that I'm aware of.  NTRights.exe makes it easy to change user rights assignments, but doesn't offer functionality to query the existing settings.  For that,
    you need to use the Win32 API function
    LsaEnumerateAccountsWithUserRight.  This can be done from PowerShell, but it involves some embedded C# code that uses P/Invoke... it's about the most complicated type of code you're likely to encounter in a PowerShell script.
    I tinkered around with this recently, and this code seems to work (though it's a little on the ugly side):
    # All of this C# code is used to call the Win32 API function we need, and deal with its output.
    $csharp = @'
    using System;
    using System.Runtime.InteropServices;
    using System.Security;
    using System.Security.Principal;
    using System.ComponentModel;

    namespace LsaSecurity
    {
        using LSA_HANDLE = IntPtr;

        [StructLayout(LayoutKind.Sequential)]
        public struct LSA_OBJECT_ATTRIBUTES
        {
            public int Length;
            public IntPtr RootDirectory;
            public IntPtr ObjectName;
            public int Attributes;
            public IntPtr SecurityDescriptor;
            public IntPtr SecurityQualityOfService;
        }

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        public struct LSA_UNICODE_STRING
        {
            public ushort Length;
            public ushort MaximumLength;
            [MarshalAs(UnmanagedType.LPWStr)]
            public string Buffer;
        }

        [StructLayout(LayoutKind.Sequential)]
        public struct LSA_ENUMERATION_INFORMATION
        {
            public IntPtr PSid;
        }

        public sealed class Win32Sec
        {
            [DllImport("advapi32", CharSet = CharSet.Unicode, SetLastError = true),
             SuppressUnmanagedCodeSecurityAttribute]
            public static extern uint LsaOpenPolicy(LSA_UNICODE_STRING[] SystemName,
                                                    ref LSA_OBJECT_ATTRIBUTES ObjectAttributes,
                                                    int AccessMask,
                                                    out IntPtr PolicyHandle);

            [DllImport("advapi32", CharSet = CharSet.Unicode, SetLastError = true),
             SuppressUnmanagedCodeSecurityAttribute]
            public static extern uint LsaEnumerateAccountsWithUserRight(LSA_HANDLE PolicyHandle,
                                                                        LSA_UNICODE_STRING[] UserRights,
                                                                        out IntPtr EnumerationBuffer,
                                                                        out int CountReturned);

            [DllImport("advapi32")]
            public static extern int LsaNtStatusToWinError(int NTSTATUS);

            [DllImport("advapi32")]
            public static extern int LsaClose(IntPtr PolicyHandle);

            [DllImport("advapi32")]
            public static extern int LsaFreeMemory(IntPtr Buffer);
        }

        public class LsaWrapper : IDisposable
        {
            public enum Access : int
            {
                POLICY_READ = 0x20006,
                POLICY_ALL_ACCESS = 0x00F0FFF,
                POLICY_EXECUTE = 0X20801,
                POLICY_WRITE = 0X207F8
            }

            const uint STATUS_ACCESS_DENIED = 0xc0000022;
            const uint STATUS_INSUFFICIENT_RESOURCES = 0xc000009a;
            const uint STATUS_NO_MEMORY = 0xc0000017;
            const uint STATUS_NO_MORE_ENTRIES = 0xc000001A;

            IntPtr lsaHandle;

            public LsaWrapper()
                : this(null) // local system if systemName is null
            { }

            public LsaWrapper(string systemName)
            {
                LSA_OBJECT_ATTRIBUTES lsaAttr;
                lsaAttr.RootDirectory = IntPtr.Zero;
                lsaAttr.ObjectName = IntPtr.Zero;
                lsaAttr.Attributes = 0;
                lsaAttr.SecurityDescriptor = IntPtr.Zero;
                lsaAttr.SecurityQualityOfService = IntPtr.Zero;
                lsaAttr.Length = Marshal.SizeOf(typeof(LSA_OBJECT_ATTRIBUTES));
                lsaHandle = IntPtr.Zero;

                LSA_UNICODE_STRING[] system = null;
                if (systemName != null)
                {
                    system = new LSA_UNICODE_STRING[1];
                    system[0] = InitLsaString(systemName);
                }

                uint ret = Win32Sec.LsaOpenPolicy(system, ref lsaAttr,
                                                  (int)Access.POLICY_ALL_ACCESS,
                                                  out lsaHandle);
                if (ret == 0) { return; }
                if (ret == STATUS_ACCESS_DENIED)
                {
                    throw new UnauthorizedAccessException();
                }
                if ((ret == STATUS_INSUFFICIENT_RESOURCES) || (ret == STATUS_NO_MEMORY))
                {
                    throw new OutOfMemoryException();
                }
                throw new Win32Exception(Win32Sec.LsaNtStatusToWinError((int)ret));
            }

            public SecurityIdentifier[] ReadPrivilege(string privilege)
            {
                LSA_UNICODE_STRING[] privileges = new LSA_UNICODE_STRING[1];
                privileges[0] = InitLsaString(privilege);
                IntPtr buffer;
                int count = 0;
                uint ret = Win32Sec.LsaEnumerateAccountsWithUserRight(lsaHandle, privileges, out buffer, out count);

                if (ret == 0)
                {
                    SecurityIdentifier[] sids = new SecurityIdentifier[count];
                    for (int i = 0, elemOffs = (int)buffer; i < count; i++)
                    {
                        LSA_ENUMERATION_INFORMATION lsaInfo = (LSA_ENUMERATION_INFORMATION)Marshal.PtrToStructure(
                            (IntPtr)elemOffs, typeof(LSA_ENUMERATION_INFORMATION));
                        sids[i] = new SecurityIdentifier(lsaInfo.PSid);
                        elemOffs += Marshal.SizeOf(typeof(LSA_ENUMERATION_INFORMATION));
                    }
                    return sids;
                }

                if (ret == STATUS_ACCESS_DENIED)
                {
                    throw new UnauthorizedAccessException();
                }
                if ((ret == STATUS_INSUFFICIENT_RESOURCES) || (ret == STATUS_NO_MEMORY))
                {
                    throw new OutOfMemoryException();
                }
                throw new Win32Exception(Win32Sec.LsaNtStatusToWinError((int)ret));
            }

            public void Dispose()
            {
                if (lsaHandle != IntPtr.Zero)
                {
                    Win32Sec.LsaClose(lsaHandle);
                    lsaHandle = IntPtr.Zero;
                }
                GC.SuppressFinalize(this);
            }

            ~LsaWrapper()
            {
                Dispose();
            }

            public static LSA_UNICODE_STRING InitLsaString(string s)
            {
                // Unicode strings max. 32KB
                if (s.Length > 0x7ffe)
                    throw new ArgumentException("String too long");

                LSA_UNICODE_STRING lus = new LSA_UNICODE_STRING();
                lus.Buffer = s;
                lus.Length = (ushort)(s.Length * sizeof(char));
                lus.MaximumLength = (ushort)(lus.Length + sizeof(char));
                return lus;
            }
        }
    }
    '@

    Add-Type -TypeDefinition $csharp

    # Here's the code that uses the C# classes we've added.
    $lsa = New-Object LsaSecurity.LsaWrapper
    $sids = $lsa.ReadPrivilege('SeInteractiveLogonRight')

    # ReadPrivilege() returns an array of [SecurityIdentifier] objects. We'll try to translate them into a more human-friendly
    # NTAccount object here (which will give us a Domain\User string), and output the value whether the translation succeeds or not.
    foreach ($sid in $sids)
    {
        try
        {
            $sid.Translate([System.Security.Principal.NTAccount]).Value
        }
        catch
        {
            $sid.Value
        }
    }
    You do need to know the proper string for each user right, and they are case sensitive. 
    Edit:  You can get a list of right / privilege names from
    https://support.microsoft.com/kb/315276?wa=wsignin1.0 ; they're the same values used for NTRights.exe.

  • Distributed Query Overhead

    Hi All,
    I have a distributed deployment of two oracle instances where database A keeps a replication of a schema from database B.
    I have A and B linked together, B sees A as a remote database, and my application sends queries to database B.
    Let's say I have the following two queries:
    The following is issued to B:
    select * from magic.accountejb@A a where a.profile_userid = ( select userid from magic.accountprofileejb@A ap where ap.userid = 'uid:174')
    and the following issued directly to A (which is basically the same query as above):
    select * from accountejb a where a.profile_userid = ( select userid from accountprofileejb ap where ap.userid = 'uid:174')
    When I measure the time through my Java application, the second query executes more than 3 times faster than the first (23ms on A compared to 80ms on B). However, when I use the sqlplus client on B to issue the exact same query, the execution time reported by sqlplus is almost identical to the second one (20ms).
    When I monitor the execution plan through UTLXPLAN, it seems the query sent to B is also fully executed remotely, on A. With a network latency of 11ms between A and B, I am not sure why I see such a long delay for the first query. Also, playing with DRIVING_SITE did not have any perceived effect on improving performance.
    I wonder if anybody has any explanation for the difference I see? is a distributed query really 3 times slower than a regular query even though both are pretty much handled by the same database engine? or is it so that I need some other sort of tuning?
    Any thoughts or advice on how I can achieve comparable performance is highly appreciated.
    thanks!

    Thanks a lot for the quick response:
    rp0428 wrote:
    1. the 4 digit Oracle version (or other DB version)
    2. the JDK version
    3. the JDBC jar name and version
    I am using ojdbc14 with Oracle 11g XE and JDK 7.
    4. the code you are using that shows an issue or problem.
    The queries I am using are basically the two queries I provided earlier, and here is the exact Java code. I loop over the code below 20 times, discard the first two retrieved results for each query, and calculate an average on the remaining 18 results collected.
    static Connection c1 = null, c2 = null;
    static Statement _session;
    public void getStats(){
    long start;
    for (int i = 0; i < 20; i++) {
    c1 = (c1 != null) ? c1 :
         DriverManager.getConnection("jdbc:oracle:thin:@//" + System.getProperty("host.1")+"/XE", "magic", "magic");
    _session = c1.createStatement();
    _session.executeUpdate("ALTER SESSION SET CURRENT_SCHEMA=magic");
    start = System.currentTimeMillis();
    _session.executeUpdate(query);
    values[0] = System.currentTimeMillis() - start;     
    _session.close();
    c2 = (c2 != null) ? c2 :
         DriverManager.getConnection("jdbc:oracle:thin:@//" + System.getProperty("host.2")+"/XE", "magic", "magic");
    _session = c2.createStatement();
    _session.executeUpdate("ALTER SESSION SET CURRENT_SCHEMA=magic");          
    start = System.currentTimeMillis();
    _session.executeUpdate(distQuery);          
    values[1] = System.currentTimeMillis() - start;     
    _session.close();
    } // end for loop     
    } // end method
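As an aside, the measure-discard-average procedure described above can be factored into a small harness so the warm-up handling is explicit. This is only a generic sketch (no JDBC), with hypothetical names, standing in for timing `_session.executeUpdate(query)`:

```java
public class TimingHarness {
    // Runs op `total` times, discards the first `warmup` timings (connection
    // setup, metadata fetch, JIT warm-up), and averages the rest.
    static double averageMillis(Runnable op, int warmup, int total) {
        double sumNanos = 0;
        for (int i = 0; i < total; i++) {
            long start = System.nanoTime();
            op.run();
            long elapsed = System.nanoTime() - start;
            if (i >= warmup) {
                sumNanos += elapsed;
            }
        }
        return sumNanos / (total - warmup) / 1_000_000.0;
    }

    public static void main(String[] args) {
        // Stand-in workload for the JDBC statement being measured.
        double avg = averageMillis(() -> {
            long x = 0;
            for (int i = 0; i < 100_000; i++) { x += i; }
        }, 2, 20);
        System.out.println(avg >= 0);
    }
}
```

Using System.nanoTime rather than currentTimeMillis avoids clock-granularity noise for sub-millisecond differences, which matters when comparing 20ms vs 80ms runs.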
    5. for performance related issues - the data volume being queried or processed
    The data volume is rather small. I measured it and it is roughly about 10K of data transfer.
    >
    Without seeing the code to see how you are measuring the timing it is hard to comment on what you posted.
    3. How was the timing computed in sql*plus?
    For sqlplus, I issue set timing on prior to executing the queries.
    4. Were the connections already created before the timing started? Or is the creation of the connection part of the timing result?
    As you see in the code, the connection is only created the first time I issue a query, and I discard the results of the first two queries using the connection, as the timing is far off, especially for the first query. I think the first query also downloads some metadata that I don't consider in calculating the performance.
    5. Do the timings include the retrieval of ALL result set data? Or just the first set of results?
    The time only covers executing the first set.
    Can you post the explain plans for the java and the sql*plus executions?
    Here are the results of the explain plan:
    PLAN_TABLE_OUTPUT
    Plan hash value: 3819315806
    | Id | Operation                   | Name                 | Rows | Bytes | Cost (%CPU) | Time     | Inst   |
    |  0 | SELECT STATEMENT REMOTE     |                      |    1 |    43 |       2 (0) | 00:00:01 |        |
    |  1 | TABLE ACCESS BY INDEX ROWID | ACCOUNTEJB           |    1 |    43 |       2 (0) | 00:00:01 | CORONA |
    |* 2 | INDEX RANGE SCAN            | ACCOUNT_USERID       |    1 |       |       1 (0) | 00:00:01 | CORONA |
    |* 3 | INDEX UNIQUE SCAN           | PK_ACCOUNTPROFILEEJB |    1 |     9 |       0 (0) | 00:00:01 | CORONA |
    Predicate Information (identified by operation id):
    2 - access("A1"."PROFILE_USERID"= (SELECT "A2"."USERID" FROM "MAGIC"."ACCOUNTPROFILEEJB" "A2" WHERE "A2"."USERID"='uid:174'))
    3 - access("A2"."USERID"='uid:174')
    Note
    - fully remote statement

  • Distributed Query in-parallel?

    I tried to execute a distributed query on three database machines. I hoped it would be executed in parallel. Unfortunately, I found the distributed query took 3 times the execution time compared to searching on only one node. Obviously, the query statement was processed one node at a time. (Data was distributed evenly across the three nodes.)
    Here is the example.
    select * from nemo.seven_dis_table
    union
    select * from [email protected] where rownum<=1000
    union
    select * from [email protected] where rownum<=1000
    According to the document http://www.dba-oracle.com/t_opq_parallel_query.htm, this SQL would be executed on the two remote sites in parallel. My question is, why is it not executed in parallel in my case? Is it really a parallel query?

    I'm by no means an expert in distributed queries, but having read through the document you linked and looking at your query, I think you're misunderstanding either parallel processing or the UNION statement.
    At the moment, all you're doing is getting data from three sources and then mashing it together with a costly UNION.
    If you put in UNION ALL instead, it would give you a better idea of how fast it retrieves data, because it won't bother sorting and removing duplicated records.
    The idea behind parallel query processing is that you can query a very large table on ONE database using multiple processors, not multiple databases on different servers.

  • Distributed query for license key

    Hello,
    I have a distributed query that I hope retrieves the license key information below:
    USE master
    GO
    create table #version
    (
    version_desc varchar(2000)
    )
    insert #version
    select @@version

    if exists
    (
    select 1
    from #version
    where version_desc like '%2005%'
    )
    Begin
    DECLARE @Registry_Value_2005 VARCHAR(1000)
    EXEC xp_regread 'HKEY_LOCAL_MACHINE','SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\Setup','ProductCode',@Registry_Value_2005 OUTPUT --2005
    SELECT @@version as 'version',@Registry_Value_2005 as 'license_key'
    End
    else if exists
    (
    select 1
    from #version
    where version_desc like '%express%'
    )
    Begin
    DECLARE @Registry_Value_2008_express VARCHAR(1000)
    EXEC xp_regread 'HKEY_LOCAL_MACHINE','SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\Setup','ProductCode',@Registry_Value_2008_express OUTPUT -- 2008 express
    SELECT @@version as 'version',@Registry_Value_2008_express as 'license_key'
    End
    else if exists
    (
    select 1
    from #version
    where version_desc like '%R2%'
    )
    Begin
    DECLARE @Registry_Value_2008_R2 VARCHAR(1000)
    EXEC xp_regread 'HKEY_LOCAL_MACHINE','SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\Setup','ProductCode',@Registry_Value_2008_R2 OUTPUT -- 2008 R2
    SELECT @@version as 'version',@Registry_Value_2008_R2 as 'license_key'
    End
    else if exists
    (
    select 1
    from #version
    where version_desc like '%2008%'
    )
    Begin
    DECLARE @Registry_Value_2008 VARCHAR(1000)
    EXEC xp_regread 'HKEY_LOCAL_MACHINE','SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\Setup','ProductCode',@Registry_Value_2008 OUTPUT -- 2008
    SELECT @@version as 'version',@Registry_Value_2008 as 'license_key'
    End
    else if exists
    (
    select 1
    from #version
    where version_desc like '%2012%'
    )
    Begin
    DECLARE @Registry_Value_2012 VARCHAR(1000)
    EXEC xp_regread 'HKEY_LOCAL_MACHINE','SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL11.MSSQLSERVER\Setup','ProductCode',@Registry_Value_2012 OUTPUT -- 2012
    SELECT @@version as 'version',@Registry_Value_2012 as 'license_key'
    End
    else
    Begin
    select 'version not recognized'
    End

    drop table #version
    I'm noticing the 'key' is coming back the same across our 2012 instances, and I'm pretty sure this isn't right. Am I retrieving the right value from the registry? I want to get the actual key that is installed when SQL Server is installed. Please help, and also feel free to borrow this code if you like.
    Thanks!
    phil

    Hi phil,
    The following query will return the serial number in binary format; you can convert this binary value to a product key as described in the other post. For more details, please review this similar blog.
    use master
    GO
    exec xp_regread 'HKEY_LOCAL_MACHINE','SOFTWARE\Microsoft\Microsoft SQL Server\110\Tools\Setup','DigitalProductID'
    GO
    Regarding the product code: SQL Server consists of different products registered in the registry. Each product has a product code (a GUID) as well as an installation package code (also a GUID). For more details, please review this similar thread.
    Additionally, for license issues, please call
    1-800-426-9400,
    Monday through Friday, 6:00 A.M. to 6:00 P.M. (Pacific Time) to speak directly to a Microsoft licensing specialist. For international customers, please use the Guide to Worldwide Microsoft Licensing Sites to find contact information in your locations.
    Thanks,
    Lydia Zhang
    TechNet Community Support

  • Using local login while RADIUS is running

    Hello,
    I would like to configure our switches to use the local login while RADIUS is working. Currently the switch just looks to the server to authenticate, so the local account will not work unless RADIUS is down. Here is our current config:
    username networkteam privilege 15 password 7 0337572B035E95412B211F50
    aaa new-model
    aaa authentication login default local
    aaa authentication login NetworkAuth group radius local
    aaa authorization exec NetworkAuth group radius local
    aaa session-id common
    line vty 0 4
    exec-timeout 30 0
    privilege level 15
    authorization exec NetworkAuth
    logging synchronous
    login authentication NetworkAuth
    transport input ssh
    line vty 5 15
    transport input none

    Hi,
    Let me make it simple.
    The following is your configuration :
    aaa new-model
    aaa authentication login default local
    aaa  authentication login NetworkAuth group radius local
    aaa authorization  exec NetworkAuth group radius local
    aaa session-id common
    line vty 0 4
    authorization exec  NetworkAuth
    login authentication NetworkAuth
    transport input ssh
    line vty 5 15
    transport input none
    This means that when you try to log in to the switch, the first 5 sessions will be authenticated against the RADIUS server because of the following configuration:
    aaa  authentication login NetworkAuth group radius local
    aaa authorization  exec NetworkAuth group radius local
    line vty 0 4
    authorization exec  NetworkAuth
    login authentication NetworkAuth
    But when you open a 6th session to the switch (beyond vty 0 4), authentication will happen locally because of the following configuration:
    aaa authentication login default local
    The default method list gets applied to the vty, console and auxiliary lines if no specific method is mentioned.
    Hence you can use local authentication for the sessions beyond the first 5.
    Hope this helps.
    Regards,
    Anisha
    P.S.: please mark this post as answered if you feel your query is resolved. Do rate helpful posts.

  • ABAP query using logical database KDF is not populating custom fields

    Hi Experts ,
    I created two following queries
    1.       VENDORCATKDF – uses KDF logical database
    2.       VENDORCATLFA1 – uses table = LFA1
    I’m pulling the same information in both queries:
    ·         Vendor Number
    ·         Country
    ·         Vendor Name
    ·         Vendor Category  (custom fields added to LFA1)
    The results for the query that uses the logical database KDF are incorrect. It doesn't pull in the flag on the custom field LFA1-ZMRO, even though the logical database KDF is made up of the table LFA1 and has these fields.
    Is there something that can be done so that all of these "custom" category fields under LFA1 (such as LFA1-ZZMRO) get pulled into queries when we use the logical database KDF?

    Hi,
    I have got the error removed by ensuring that fields from one table are part of one line (taking help of the ruler). But the underlying problem remains: the output is not ALV but list output.
    I do not think having additional fields in the query is the reason for this.
    Is it because I am adjusting the output length of columns to ensure no hierarchical error?
    Can we not have a query using an LDB which is shown as an SAP list?
    Regards,
    Garima.

  • Can I use iCloud as my library instead of using local storage? Would I be able to synchronize the music on my phone and make CDs out of the music in iCloud using iTunes?


    Many thanks JEM24 for your help. I've just spent the best part of six hundred pounds on a new Sony RX100M2 compact camera, so I have no interest in the iPod's camera at all really. I doubt I'll be watching many videos on it, as I'm very lucky in that I have a good Android tablet. It's more as a stock music player that I'll be buying the iPod for, if indeed I do end up buying one. I don't like the idea of paying the exorbitant amount added for more memory space that Apple, along with most other companies, charge. In fact I read an article on this very subject just yesterday in the tech section of Flipboard. It stated that in the case of the iPhone, the actual cost to Apple et al. of each additional gigabyte of storage is something in the order of 60p. This is certainly not reflected in the price we customers have to pay at the till. It's primarily for this reason, because Apple products in particular do not allow adding expandable memory of your own in the form of cheap-to-buy cards, that nobody in their right mind buys the 64GB etc. iPhones. I am aware that we are discussing my potential purchase of an iPod touch here, but you see my point. Many thanks again for helping me.

  • Problem using local variable in event loop

    I have a state machine from which I want to monitor various controls, including "Start" and "Stop" buttons.  Not every state needs to monitor the controls.  At present, most states run timed loops.  In the first state that reads the front panel, I have an Event structure (inside a While loop) that monitors the various controls' Change Value events.  For numeric controls, I update variables (in shift registers) as needed.  The "Start" button is used to end the While loop controlling the Event structure, allowing the State to exit to the next state.
    My problem comes in subsequent states that employ this same idea.  Here, I put a Local Variable bound to the Start button and use the same code, but it frequently happens that when I enter this particular state, I cannot "turn on" the control -- I push the button, but it stays off.  Curiously, if it was On when I enter, I can turn it off, but then I'm stuck not being able to turn it on.
    I mocked up a very simple routine that illustrates this.  There are two sequences (corresponding to the two states).  Both use an Event loop with a local variable bound to my Stop button (really this is an LED control with custom colors).  I've deliberately moved the "initialization" (the declaration of the control in the block diagram) out of the Event loops -- putting it inside the first loop modifies the behavior in another strange way.
    Here's my thinking on how I would expect this to work:  The code outside Event Loop 1 should have little effect.  Assume the Stop button is initially Off.  You will "sit" in Event Loop 1 until you push the Stop button, changing its value to True; this value will be passed out of the Event case and cause the first While loop to exit.  You now enter the second sequence.  As I understand the Exit tunnel, it defaults to "False", so I'd expect to stay in the second Event loop until I turn the Stop button from On to Off, which will pass out a False, and keep me in the While for one more button push.  However, this doesn't happen -- I immediately exit, as though the "True" value of the Stop local variable is being seen and recognized by the Event loop (even though it hasn't changed, at least not in the context of this second loop).
    An even more curious thing occurs if I start this routine with the Stop button turned on.  Now I start in my Event loop waiting for a change, but this time the change will be from On to Off, which won't cause an exit from the frame.  This will be reflected by having the While loop count increment.  We should now be in the state of the example above, i.e. in an Event loop waiting for the control to be pushed again, and turned On.  However, clicking the control has no effect -- I cannot get it to "turn on".
    Where am I going astray in my thinking?  What is it about this method of doing things that violates the Labview paradigm?  As far as I can tell, what I'm doing is "legal", and I don't see the flaw in my reasoning, above (of course not -- otherwise I'd have fixed it myself!).  Note that because I'm using local variables inside Event loops (and I'm doing this because there are two places in my code where I want to do such testing), the Stop control is not latching (as required).  Is there something that gets triggered/set when one reads a latched control?  Do I need to do this "manually" using my local variable?
    I'll try to attach the simple VI that illustrates this behavior.
    Bob Schor
    Attachments:
    Simple Stop Conundrum.vi ‏14 KB

    altenbach wrote:
    Ravens Fan wrote:
    NEVER have multiple event structures that share the same events. 
    Actually, that's OK.  NOT OK is having multiple event structures in the same sequence structure.
    See also: http://forums.ni.com/ni/board/message?board.id=170&message.id=278981#M278981
    That's interesting.  I had always thought I had read more messages discouraging such a thing rather than saying it was okay.  Your link led me to another thread with this message: http://forums.ni.com/ni/board/message?board.id=170&message.id=245793#M245793.  Now that thread was mainly concentrating on registered user events, which would be a different, but related, animal.
    So if you have 2 event structures, do they each have their own event queue?  So if you have a common event, one structure pulls it off its own event queue and this does not affect the other structure's event queue?  I guess the inherent problem with this particular VI was that the second event structure locked the front panel: the code never got to that 2nd event structure because the first loop never stopped, since the change was from true to false.  After reading your post and the others, I did some experimentation and turned off "Lock front panel" on the 2nd structure, and that prevented the lockup of the program.
    Overall, the example VI still shows problems with the architecture and I think your answer should put the original poster on the right track.  I think as a rule I would probably never put the same event in multiple structures, I feel there are better ways to communicate the same event between different parts of a program,  but I learned something by reading your reply and about how the event structures work in the background.  Thanks.
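The per-structure event queue behavior described in the replies above can be sketched in Python as an analogy (this is not the NI API; the class and names are purely illustrative): each registered "event structure" owns its own queue, so one consumer pulling a broadcast event does not remove it from the other consumer's queue.

```python
from queue import Queue

class EventBroadcaster:
    """Toy model of LabVIEW event registration: every registered
    listener gets its own queue, so one listener consuming an event
    does not remove it from another listener's queue."""

    def __init__(self):
        self.queues = []

    def register(self):
        q = Queue()
        self.queues.append(q)
        return q

    def fire(self, event):
        # A copy of the event is enqueued for every listener.
        for q in self.queues:
            q.put(event)

broadcaster = EventBroadcaster()
loop1 = broadcaster.register()  # first "event structure"
loop2 = broadcaster.register()  # second "event structure"

broadcaster.fire("Stop:ValueChanged")

# loop1 consuming the event does not affect loop2's queue.
assert loop1.get() == "Stop:ValueChanged"
assert not loop2.empty()
assert loop2.get() == "Stop:ValueChanged"
```

In this model the lockup described above corresponds to one consumer never draining its queue while blocking the producer, which is what "Lock front panel" did in the original VI.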

  • Getting an error when I am executing a BI query using ABAP

    Hi Expert,
    I am getting an error when I am executing a BI query using ABAP. It gives me the error "The Info Provider properties for GHRGPDM12 are not the same as the system default", and the error analysis says the following.
    Property Data Integrity has been set differently to the system default.
    Current setting: 0 for GHRGPDM12
    System default: '7'
    As I am very new to BI and have very limited knowledge, I am not able to understand this problem. Can anyone help me resolve this issue? It was working fine previously; I have been getting this error for the last 2 days.
    When I am debugging, I get the error from the following call, which creates an instance of cl_rsr_request:
    CREATE OBJECT r_request
      EXPORTING
        i_genuniid = p_genuniid.
    It is not able to create the object. Can anyone please help me out?
    Thanks in advance.
    Regards
    Satrajit
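For what it's worth, the error above amounts to a consistency check of an InfoProvider property against a system default. A minimal Python sketch of that check, assuming illustrative names and values (this is not the SAP implementation):

```python
# Illustrative system default, mirroring "System default: '7'" above.
SYSTEM_DEFAULTS = {"data_integrity": "7"}

def check_provider_properties(provider, properties):
    """Raise if a property deviates from the system default,
    mimicking the 'properties are not the same as the system
    default' error reported for GHRGPDM12."""
    for name, default in SYSTEM_DEFAULTS.items():
        current = properties.get(name, default)
        if current != default:
            raise ValueError(
                f"The Info Provider properties for {provider} are not "
                f"the same as the system default: {name} is {current}, "
                f"default is {default}"
            )

# The reported case: Data Integrity set to 0 while the default is 7.
try:
    check_provider_properties("GHRGPDM12", {"data_integrity": "0"})
except ValueError as e:
    print(e)
```

The usual fix in such cases is to align the provider's property with the default (or vice versa) in the query properties, rather than changing any code.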

    Hi,
    I am able to solve this problem
    Regards
    Satrajit

  • Investment mgt: Error while distributing budgets using the IM52 transaction code

    I am getting the error message "Availability control cannot be activated for hierarchical projects" when I distribute budgets using the IM52 transaction code in Investment Management.
    Can you please tell me why this happens and how to solve it?
    Edited by: aravind  reddy on Aug 19, 2008 4:34 PM


Maybe you are looking for

  • Multiprovider error

    Hi all, I am creating a MultiProvider (Z_M05) on top of a cube (say Z_C05). While activating it, it gives the error message "Compounding consistency of the InfoObject xxxx not ensured". Can anyone suggest what could be done to solve this issue. Thanks

  • HP Laserjet 1020 Plus not friendly with MS Excel ???

    Hi, I am running Windows 8. I am able to print MS Word docs through the HP 1020 Plus but not MS Excel sheets, as the LaserJet printer does not appear in the printers list. While setting it as the default printer in Devices, I am getting the error message "operat

  • DBMS_XMLQuery and DBMS_XMLSave package

    I have Oracle 8.1.7. Where can I get the DBMS_XMLQuery and DBMS_XMLSave packages with usage samples? The documentation I have doesn't contain any.

  • SB Audigy Audio [9400] Drv

    I cannot find drivers for my SB sound card. I've tried every driver on the download page and they all say 'Setup could not detect any Sound Blaster audio card on your system, please ensure your SB hardware is properly installed'.

  • Strange files on my ipod

    I need some help! I keep getting strange files on my iPod. When I view it as a hard drive there are crazy files with names like !66jyk!k, very strange, and it just crashes out of the blue and I have to reformat it every couple of days. I have had every gen