Hi!

I want to update the input value.

I have already trained a model network.

My input has dims [16, ]. Here, I want to use the 15th and 16th input values as parameters.

I mean I want to update two parameters.

I think I can get the backward gradients by using the `compute_gradients` function.

How can I use it?

Hey @Xim_Lee , not sure I understand your question.

Your model, as it seems, has the following weight matrices (I’m assuming you mean weights by “parameters”?):

- Assuming your first Dense layer has size 256 (the RLlib default).
- Your input has size 16, so the weight matrix would be a [16 x 256] one, plus a [256] bias vector.
- You could get to that matrix, or bias vector, by doing:

```
pol = trainer.get_policy()
weights = pol.get_weights()
# manipulate `weights`, which should be a structure of np.arrays
# then push the modified weights back (same structure as returned
# by `get_weights()`):
pol.set_weights(weights)
```
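To make the manipulation step concrete, here is a minimal NumPy-only sketch of the idea, assuming `get_weights()` returns a dict of arrays. The key names (`fc_1/kernel`, `fc_1/bias`) are hypothetical; print your actual `weights` object to see the real keys for your model:

```python
import numpy as np

# Stand-in for what `pol.get_weights()` might return: a dict of np.arrays.
# Key names here are hypothetical -- inspect your real `weights` to find them.
weights = {
    "fc_1/kernel": np.ones((16, 256)),  # [16 x 256] weight matrix
    "fc_1/bias": np.zeros(256),         # [256] bias vector
}

# Example manipulation: scale the rows that connect the 15th and 16th
# input values (indices 14 and 15) to the first Dense layer.
weights["fc_1/kernel"][14:16, :] *= 0.5

# With RLlib you would then push the modified structure back:
# pol.set_weights(weights)
```

Each row `i` of the [16 x 256] kernel holds the weights from input `i` to all 256 units, so touching rows 14 and 15 only affects how those two inputs feed into the network.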

Thanks for the reply @sven1977 !

The problems I ran into are explained below the figure.

Could you take a look at my explanation of the problems?