Monday, November 5, 2012

The terminal server has exceeded the maximum number of allowed connections

An established Remote Desktop connection to the server remains active until the user logs off. If the user simply closes the Remote Desktop window instead, that username stays logged on.

The Remote Desktop software running on the server allows up to two simultaneous administrative sessions.

If a third login attempt is made, the error above is shown and the user cannot complete the login.

To get around this error and log into the server, you can log into a special session called the console session and log off the other connections.

To do this, simply type the following into Start -> Run or a Command Prompt:

mstsc /v:00.00.00.00 /admin

or
mstsc /v:00.00.00.00 /f -console

Replace 00.00.00.00 with your server's IP Address.

Alternatively, you can modify the Remote Desktop shortcut:

For Windows Vista/7: Click Start, type remote desktop, right-click the shortcut, and click Properties.

In the Target field, paste the following command: %systemroot%\system32\mstsc.exe /admin

For Windows XP: Click Start, go to All Programs > Accessories > Communications, right-click Remote Desktop Connection, and click Properties.

In the Target field, paste the following command: %systemroot%\system32\mstsc.exe /admin

Click OK to save the change.

Now type in the IP of your server and you should be able to log in via the console.

Once logged into the server, right-click your taskbar and select 'Task Manager' from the list. When Task Manager opens, select the 'Users' tab. This window shows all of the users currently connected to the server. Select a disconnected user and click the 'Log off' button at the bottom to clear it.

Once this is done, log off the session you are using and you should be able to connect as normal.

Please use the console session only to properly terminate the other active sessions.
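
As a command-line alternative to Task Manager, the built-in query and logoff tools can list and clear disconnected sessions remotely. A minimal sketch (replace 00.00.00.00 with your server's IP address; the session ID 2 is only an example):

rem List all sessions on the remote server and note the ID of any session shown as Disc
query session /server:00.00.00.00

rem Log off the disconnected session by its ID
logoff 2 /server:00.00.00.00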

Thursday, August 30, 2012

Configure session timeout in OBIEE 11g

When idle (no requests running), the connection between Oracle BI Presentation Services and the Oracle BI services expires after 210 minutes by default. This can be configured manually using the following steps:

1. Find the file instanceconfig.xml in the following location:

         OracleInstance\Config\OracleBIPresentationServices\coreapplication\instanceconfig.xml

2. In the Security block, add a ClientSessionExpireMinutes element with the number of minutes you want (see the sketch after these steps):

                 <ClientSessionExpireMinutes>1440</ClientSessionExpireMinutes>

3. Restart Oracle BI Presentation Services.
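
For reference, a minimal sketch of how the edited block might look in instanceconfig.xml (the ServerInstance element already exists in the file; add the Security block inside it if it is not already there, and note that element names are case sensitive):

         <ServerInstance>
            <!-- other existing settings -->
            <Security>
               <ClientSessionExpireMinutes>1440</ClientSessionExpireMinutes>
            </Security>
         </ServerInstance>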


Thursday, August 23, 2012

Display OutputLogicalSQL in Presentation Services for every Analysis

This is one of the most helpful options for a developer.
We need to modify the instanceconfig.xml file to enable this feature.

Follow these simple steps to enable it.

1. Find the instanceconfig.xml file, take a backup for safety, and open the file.

2. Add the XML below to the file and save it (see the placement sketch after these steps).

                     <TestAutomation>
                         <OutputLogicalSQL>true</OutputLogicalSQL>
                     </TestAutomation>



3. Restart Presentation Services.
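
For context, a minimal sketch of where the block typically sits in instanceconfig.xml (nesting directly under the existing ServerInstance element is assumed here):

                     <ServerInstance>
                        <!-- other existing settings -->
                        <TestAutomation>
                           <OutputLogicalSQL>true</OutputLogicalSQL>
                        </TestAutomation>
                     </ServerInstance>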


That's it, you are done.

Monday, July 30, 2012

OBIEE Cache Hit.....When is a query qualified for a cache hit

When caching is enabled, each query is evaluated to determine whether it qualifies for a cache hit. A cache hit means that the server was able to use cache to answer the query and did not go to the database at all. The Oracle BI Server can use the query cache to answer queries at the same or higher level of aggregation.




1.A subset of columns in the SELECT list must match:
All of the columns in the SELECT list of a new query have to exist in the cached query to qualify for a cache hit, or they must be able to be calculated from the columns in the query.


This rule describes the minimum requirement to hit the cache, but meeting this rule does not guarantee a cache hit. The other rules listed here also apply.

2.Columns in the SELECT list can be composed of expressions on the columns of the cached queries:
The Oracle BI Server can calculate expressions on cached results to answer the new query, but all the columns must be in the cached result. For example, the query:


SELECT product, month, averageprice FROM sales WHERE year = 2000

hits cache on the query:

SELECT product, month, dollars, unitsales FROM sales WHERE year = 2000

because averageprice can be computed from dollars and unitsales (averageprice = dollars/unitsales).

3.WHERE clause must be semantically the same or a logical subset:

For the query to qualify as a cache hit, the WHERE clause constraints must be either equivalent to the cached results, or a subset of the cached results.


A WHERE clause that is a logical subset of a cached query qualifies for a cache hit if the subset meets one of the following criteria:

•A subset of IN list values. Queries requesting fewer elements of an IN list cached query qualify for a cache hit. For example, the following query:

SELECT employeename, region

FROM employee, geography

WHERE region in ('EAST', 'WEST')

qualifies as a hit on the following cached query:

SELECT employeename, region

FROM employee, geography

WHERE region in ('NORTH', 'SOUTH', 'EAST', 'WEST')

•It contains fewer (but identical) OR constraints than the cached result.

•It contains a logical subset of a literal comparison. For example, the following predicate:

WHERE revenue < 1000

qualifies as a cache hit on a comparable query with the predicate:

WHERE revenue < 5000

•There is no WHERE clause. If a query with no WHERE clause is cached, then queries that satisfy all other cache hit rules qualify as cache hits regardless of their WHERE clause.

4.Dimension-only queries must be an exact match:
If a query is dimension only, meaning that no fact or measure is included in the query, then only an exact match of the projection columns of the cached query hits the cache. This behavior prevents false positives when there are multiple logical sources for a dimension table.



5.Queries with special functions must be an exact match:
Other queries that contain special functions such as time series functions (AGO, TODATE, and PERIODROLLING), limit and offset functions (OFFSET and FETCH), external aggregation functions, and filter metrics must also be an exact match with the projection columns in the cached query. In these cases, the filter must also be an exact match.


6.Set of logical tables must match:
To qualify as a cache hit, all incoming queries must have the same set of logical tables as the cache entry. This rule avoids false cache hits. For example, SELECT * FROM product does not match SELECT * FROM product, sales.


7.Session variable values must match, including security session variables:
If the logical SQL or physical SQL statement refers to any session variable, then the session variable values must match. Otherwise, the cache is not hit.


In addition, the value of session variables that are security sensitive must match the security session variable values that are defined in the repository, even though the logical SQL statement itself does not reference session variables.
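
For illustration, a logical SQL filter that references a session variable (the variable name USER_REGION is hypothetical):

WHERE region = VALUEOF(NQ_SESSION."USER_REGION")

A new query with this filter can only reuse a cache entry that was created while USER_REGION held the same value.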

8.Equivalent join conditions:
The resultant joined logical table of a new query request has to be the same as (or a subset of) the cached results to qualify for a cache hit.



9.DISTINCT attribute must be the same:
If a cached query eliminates duplicate records with DISTINCT processing (for example, SELECT DISTINCT...), then requests for the cached columns must also include the DISTINCT processing; a request for the same column without the DISTINCT processing is a cache miss.
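
For illustration, with a hypothetical sales table, if the cached query is:

SELECT DISTINCT region FROM sales

then the request:

SELECT region FROM sales

is a cache miss, because it asks for the same column without the DISTINCT processing.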



10.Queries must contain compatible aggregation levels:
Queries that request an aggregated level of information can use cached results at a lower level of aggregation. For example, the following query requests the quantity sold at the supplier and region and city level:


SELECT supplier, region, city, qtysold
FROM suppliercity

The following query requests the quantity sold at the city level:

SELECT city, qtysold
FROM suppliercity

The second query results in a cache hit on the first query.

11.Limited additional aggregation:
For example, if a query with the column qtysold is cached, then a request for RANK(qtysold) results in a cache miss. Additionally, a query that requests qtysold at the country level can get a cache hit from a query that requests qtysold at the country, region level.



12.ORDER BY clause must be composed of columns in the select list:
Queries that order by columns that are not contained in the select list result in cache misses.
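
For illustration, with a hypothetical sales table, the query:

SELECT product, revenue FROM sales ORDER BY product

can still qualify for a hit (subject to the other rules), but:

SELECT product, revenue FROM sales ORDER BY dollars

results in a cache miss, because dollars is not in the select list.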



13.Avoiding cache misses using advanced hit detection:
You can avoid some cache misses by setting the parameter USE_ADVANCED_HIT_DETECTION to YES in the NQSConfig.INI file. Advanced hit detection enables an expanded search of the cache for hits.
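
A minimal sketch of the relevant lines in the cache section of NQSConfig.INI (other parameters in the section are left at their existing values):

# Cache section of NQSConfig.INI
ENABLE = YES;
USE_ADVANCED_HIT_DETECTION = YES;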



Oracle BI Query not Cached.....know the reasons

There are several reasons for an OBIEE query not being cached. Here are most of them.
First of all, are you sure it is not cached?

To check:

1. Open your RPD in online mode.
2. Under Manage you will find Cache.
3. Click it to open the Cache Manager.
4. If you find the query in the window then it is cached after all; otherwise there is a problem we have to fix.

Now let's look at the possible reasons for this.

1. Cache enabled:
Was your cache enabled?
         To check, open the NQSConfig.INI file.
         Under the cache section, if ENABLE is set to 'YES' then everything is fine here.
         If not, set it to YES (see the NQSConfig.INI sketch after this list).

2. Use of non-cacheable SQL functions:

      If any non-cacheable functions are used in the request, the query will not be cached,

       e.g. CURRENT_TIMESTAMP, CURRENT_DATE, CURRENT_TIME, RAND, POPULATE.

3. Non-cacheable table:
             If the query for the request references any non-cacheable table, the results will not be cached even if all other tables are set to cacheable.


4. Query execution terminated:
If the request is forcibly cancelled or times out while running on the Presentation Server, no cache entry is created.

5. Bad cache configuration:
If the MAX_CACHE_ENTRY_SIZE parameter is set larger than the actual storage space (DATA_STORAGE_PATHS) available on the drive, nothing can be added to the cache.

6. Result set too large for the cache:
The cache limits are set by the NQSConfig.INI parameters MAX_ROWS_PER_CACHE_ENTRY (default 100,000 rows) and MAX_CACHE_ENTRY_SIZE (default 1 MB). If the rows retrieved by the request exceed MAX_ROWS_PER_CACHE_ENTRY, or the data retrieved exceeds MAX_CACHE_ENTRY_SIZE, the cache entry will not be created.

7. The query got a cache hit:
In general, if the query gets a cache hit on a previously cached query, the results of the current query are not added to the cache. The only exception is aggregate "roll-up" hits, which are added to the cache if the NQSConfig.INI parameter POPULATE_AGGREGATE_ROLLUP_HITS is set to YES.
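
A minimal sketch of the cache-related parameters in NQSConfig.INI discussed above (the values and the storage path are only examples; adjust them for your environment):

# Cache section of NQSConfig.INI (example values only)
ENABLE = YES;
DATA_STORAGE_PATHS = "C:\OracleBIData\cache" 500 MB;
MAX_ROWS_PER_CACHE_ENTRY = 100000;
MAX_CACHE_ENTRY_SIZE = 1 MB;
POPULATE_AGGREGATE_ROLLUP_HITS = NO;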








Tuesday, June 19, 2012

Downtime Query for Oracle Production Scheduling

SELECT org.organization_code organization_code,
    crd.resources resources,
    crm.resource_desc resource_description,
    TO_CHAR(grum.from_date, 'DD-MON-YYYY HH24:MI:SS') start_time,
    TO_CHAR(grum.to_date, 'DD-MON-YYYY HH24:MI:SS') stop_time,
    floor(((grum.to_date - grum.from_date)*24*60*60)/3600)
      || ':' || ROUND((((grum.to_date - grum.from_date)*24*60*60) - floor(((grum.to_date - grum.from_date)*24*60*60)/3600)*3600)/60)
      || ':' || ROUND(((grum.to_date - grum.from_date)*24*60*60) - floor(((grum.to_date - grum.from_date)*24*60*60)/3600)*3600 - ROUND((((grum.to_date - grum.from_date)*24*60*60) - floor(((grum.to_date - grum.from_date)*24*60*60)/3600)*3600)/60)*60)
    elapsed_time,
    grum.reason_code reason_code,
    mtr.description reason_description
FROM     org_organization_definitions org,
    cr_rsrc_dtl crd,
    CR_RSRC_MST crm,
    gmp_rsrc_unavail_man grum,
    MTL_TRANSACTION_REASONS mtr
WHERE org.organization_id = crd.organization_id
    --AND org.organization_code = NVL(:ORG_CODE,org.organization_code)
    AND crd.resource_id = grum.resource_id
    AND crd.resources = crm.resources
    --AND crd.resources = NVL(:RESOURCES,crd.resources)
    AND grum.reason_code = mtr.reason_name(+)
    --AND grum.reason_code = NVL(:REASON_CODE,grum.reason_code)
    AND (TO_CHAR(grum.from_date,'YYYY/MM/DD') >= '2012/02/22'
    AND TO_CHAR(grum.to_date,'YYYY/MM/DD') <= '2012/02/22')
    -- and (grum.to_date >= '0-jan-12' and grum.to_date < '04-jan-12')
    -- AND to_char(nvl(grum.from_date,SYSDATE),'YYYY/MM/DD HH24:MI:SS')
    --     between nvl(:START_FROM_DATE,to_char(nvl(grum.from_date,SYSDATE),'YYYY/MM/DD HH24:MI:SS'))
    --     AND nvl(:START_TO_DATE,to_char(nvl(grum.from_date,SYSDATE),'YYYY/MM/DD HH24:MI:SS'))
    -- AND to_char(nvl(grum.to_date,SYSDATE),'YYYY/MM/DD HH24:MI:SS')
    --     between nvl(:STOP_FROM_DATE,to_char(nvl(grum.to_date,SYSDATE),'YYYY/MM/DD HH24:MI:SS'))
    --     AND nvl(:STOP_TO_DATE,to_char(nvl(grum.to_date,SYSDATE),'YYYY/MM/DD HH24:MI:SS'))
ORDER BY org.organization_code, crd.resources, grum.from_date

Ship To and Bill To Address in Order Management

select     ooha.order_number,
    msi.segment1,
    hcsu_ship.site_use_id,
    hcasa_ship.cust_acct_site_id,
    hps.party_site_id,
    hp.party_name "Customer Name",
    sota.name "Order Type",
    CONCAT(CONCAT(CONCAT(CONCAT(CONCAT(hl_ship.address1,hl_ship.address2),hl_ship.address3),hl_ship.address4),hl_ship.state),hl_ship.country) "Ship To Address",
    round(oola.ordered_quantity2) "Ordered Quantity"

from oe_order_lines_all oola
    inner join OE_ORDER_HEADERS_ALL ooha on oola.header_id= ooha.header_id
--inner join oe_order_lines_all oola_shipto oola_shipto.ship_to_org_id = hcsu_ship.site_use_id
inner join so_order_types_all sota on ooha.order_type_id = sota.order_type_id
inner join HZ_CUST_ACCOUNTS hca on hca.cust_account_id = ooha.sold_to_org_id
inner join HZ_PARTIES hp on hp.party_id = hca.party_id
inner join mtl_system_items_b msi on oola.inventory_item_id = msi.inventory_item_id
    AND oola.ship_from_org_id = msi.organization_id
 --inner join HZ_CUST_SITE_USES_ALL hcsu_ship on oola.ship_to_org_id = hcsu_ship.site_use_id
inner join HZ_CUST_SITE_USES_ALL hcsu_ship on oola.invoice_to_org_id = hcsu_ship.site_use_id
inner join HZ_CUST_ACCT_SITES_ALL hcasa_ship on hcsu_ship.cust_acct_site_id = hcasa_ship.cust_acct_site_id
inner join HZ_PARTY_SITES hps on hcasa_ship.party_site_id = hps.party_site_id
inner join HZ_PARTY_SITE_USES hpsu_ship on hpsu_ship.party_site_id = hps.party_site_id
inner join HZ_LOCATIONS hl_ship on hps.location_id = hl_ship.location_id
where ooha.order_number = :ORDER_NUMBER

Sunday, February 12, 2012

Convert columns into rows and vice-versa in Excel

We can change rows to columns and vice versa very easily.

Follow the steps and get the result.

Step 1: Copy whatever you want to change.

Step 2: Right-click where you want to paste and select Paste Special.


Step 3: Select the Transpose option and click OK.


That's it, you're done.

Dynamically Change The Column Names in OBIEE 11g

We can change column names dynamically using presentation variables.
Here is how it works.

Step 1: Create a dashboard prompt that sets the presentation variable.

Step 2: Place the prompt on the dashboard page and hide it.

Step 3: Change the column name to @{presentation_variable}{default value} (see the example after the notes below).

Note: 1. The prompt should be built on a column other than the columns used on the page.

2. If you want n dynamic columns, create n prompts and hide them.
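
For example (the variable name pv_colname and the default Product are hypothetical), if the hidden dashboard prompt sets a presentation variable called pv_colname, set the column's custom heading to:

@{pv_colname}{Product}

The heading then shows whatever value the prompt assigns to pv_colname, and falls back to Product when the variable is not set.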

Hide a Section in OBIEE 11g

We can hide a section on a dashboard page.

Follow these steps:

Go to Section Properties

Go to Format Section

In the CSS Style field, enter display:none


Save and run the page.